OpenClaw Creator Spent $1.3M on OpenAI Tokens in 30 Days

(twitter.com)

107 points | by eamag 8 hours ago

121 comments

  • mattbrewsbytes 17 minutes ago

    In the dot com boom there were companies spending like $100+ on ads per $1 of revenue. The cost of customer acquisition was insanely high because of the hype of ecommerce, and it was being subsidized by VC money and IPOs.

    This AI boom feels similar: a lot of hype, and the AI usage costs are being subsidized by private equity/VC so far. IPOs are supposed to happen this fall for OpenAI and Anthropic. They're going to have to face the music of corporate governance, accounting rules, reporting revenue, earnings, etc. Subsidizing users seems unsustainable; they need to either jack up rates or downgrade usage per plan. Then there are the circular investments between all of them and Google, Microsoft, etc. Seems like a house of cards.

  • Tiberium 7 hours ago

    This is quite a misleading title, because this is the raw API cost, but he (obviously) has unlimited usage as an OpenAI employee. Moreover, if you use e.g. the $200 Codex sub, you get roughly $5k-$6k of monthly API usage if you exhaust your limit every week, if not more, which shows that the raw API cost is (likely) not what it costs OpenAI, unless they're subsidizing all of this.

    He did clarify that it was with fast mode. Without fast mode it'd "only" be $300k in raw API cost, or ~60 $200 Codex subscriptions.

    • MattDaEskimo 7 hours ago

      How is it misleading if this would be the consumer's cost?

      Eventually Codex's subscription subsidization will diminish to near-zero, like the rest of the providers.

      It's extremely important that people understand how expensive these models currently are. Even $300k in raw API costs is alarming for the output.

      • pama 6 hours ago

        Peter shows the near-term future. Raw API consumer pricing is arbitrary. (The frontier labs can put a 100x markup on it to cover other operational expenses.) The true cost of inference with same-capability models keeps dropping at dizzying rates, especially at data-center batch sizes. (Due to both Nvidia hardware and algorithmic changes.) So the developments that Peter can achieve today with internal support from OpenAI will be doable by anyone in a few years without breaking the bank.

        • vrganj 6 hours ago

          But.... why? Like I read his thing on how he spends the tokens [0] and it sounds like satire.

          He has agents write shitty code for features other agents think other people want, then has it reviewed by other agents in hopes of catching bugs that the first agent put there, then has some more agents try to find security bugs in the now double-agented code to make it triple-agented and at the end of the day, he spent a shitton of tokens, probably emitted enough carbon to heat our planet by another degree, and has a feature nobody really asked for that might or might not work.

          He then has the sense of humor to call this grotesque process "incredibly lean".

          What's the point in all of this? What problems is this solving? Who's benefiting?

          [0] https://xcancel.com/steipete/status/2055405041843052792

          • browningstreet 5 hours ago

            I don’t use openclaw myself anymore, but this agonizing is thin and unbearable. He did a thing. People use the thing. He got paid for the thing. He iterates the thing. What’s hard to understand about this?

            The morality issues around consumption and climate impact are not his alone, and are not unique to his endeavor. Every company with an enterprise LLM agreement has a share, for instance.

          • VerTiGo_Etrex 5 hours ago

            But this is okay?

            “He has /people/ write shitty code for features other /people/ think other people want, then has it reviewed by other /people/ in hopes of catching bugs that the first /people/ put there, then has some more /people/ try to find security bugs in the now /double-peopled/ code to make it /triple-peopled/ and at the end of the day, he spent a shitton of /money, the people/ probably emitted enough carbon to heat our planet by another degree, and has a feature nobody really asked for that might or might not work.”

            Honestly sounds like a normal tech company to me. Just with much dumber “people” who are getting exponentially smarter, eventually never die, eventually never forget.

            You have to skate to where the puck is going, not where it is.

            • wasJka 5 hours ago

              Congratulations! You are the winner of the 2026 "humans do it, too" award for the most primitive substitution of an original argument.

          • simianwords 5 hours ago

            >He then has the sense of humor to call this grotesque process "incredibly lean".

            > What's the point in all of this? What problems is this solving? Who's benefiting?

            The economy doesn't work the way you think it does. It's not central planning. Usages aren't all detailed in a specification, submitted for approval to 100 agencies, and only then allowed.

            It shows a lack of intellectual curiosity not to engage deeply with obviously profound technology and its implications. I find this exercise helpful.

            Peter is predicting how LLMs will be used in the future when the prices go down. And they will definitely go down. I think his predictions are correct and we will definitely have something similar to OpenClaw.

            • vrganj 5 hours ago

              > The economy doesn't work like how you think it does. It's not central planning.

              I'm aware. That is in fact my central critique. The way it works is incredibly wasteful of our limited resources, as illustrated by this guy burning through fuel during a time of crisis for no perceptible gain.

              > It shows lack of intellectual curiosity to not engage deeply with obviously profound technology and what the implications are.

              The "obviously profound" is an assertion without proof.

              The rest I agree with, we should engage with the implications of burning through energy to build features that bots think humans want, but nobody actually asked for, all while climate scientists are telling us we're heading for the apocalypse. It is intellectually incurious to just ignore the questions of why and at what cost, maybe even dangerously so.

            • w11qsh 5 hours ago

              Mario Zechner wrote the main part of this IP laundering application.

              I didn't know that studying photocopiers is suddenly linked to "intellectual curiosity". Being a photocopier maintenance guy was always considered boring.

              What you put on top of the machine was intellectually interesting.

      • namenotrequired 7 hours ago

        > How is it misleading if this would be the consumer's cost?

        Because it does not say “equivalent of”, it literally says he spent money that he did not spend

        • mathgeek 6 hours ago

          This. If I go up to my boss and say “I spent $10000 but it only cost us $1000” then I spent $1000.

    • therealpygon 4 hours ago

      Hey guys, I’m super good at using tokens.

      Business: Amazing, that’s great what did you do?

      I ran 50 instances and had them all fix the same bugs at the same time and then analyzed the results of all 50 runs to have AI score each of the attempts, then sort them, then compare them to each other in a round robin tournament style double elimination to ensure I got the best result. Then I had AI convert this into a skill, and then ran all 50 attempts again and repeated the process to ensure that I had the absolute best result. It was amazing and I used 1.3 billion tokens!

      Business: That is amazing! What did you fix?

      A spelling mistake on the About page.

    • irthomasthomas 4 hours ago

      I think it's less misleading this way, because every other reader would have to pay $1.3M to emulate his workflow for a similar-sized project. His discounted internal costs are relevant only to OpenAI.

      • Tiberium 3 hours ago

        I did mention that you could use ~60 $200 Codex accounts to emulate his workflow without /fast, or 2.5x that if you used /fast. Not $1.3M

    • Terretta 7 hours ago

      Even at unlimited budget, there is a crossover where outsourcing thinking to the machine costs more than the machine.

      What I mean by this:

      1. Intern, analyst, junior, or offshore level coding is cheaper when done by the machine.

      // Side note: There is good reason the industry invests in suboptimal output from this set which moves to the "cost" column when using an LLM, but nobody's accounting for that.

      2. For the interns, analysts, junior, or offshoring to do the right thing costs a multiple of the coding effort: the PdM/PjM stuff of course, but also the Stakeholder, Product Owner, Architect, Principal Engineer, QA, and SRE stuff.

      3. If you are not a principal- or staff-level engineer, you are likely unqualified to catch and fix the errors LLMs make across engineering, much less across these other PDLC (product development lifecycle, which includes SDLC and SRE) loops.

      4. For LLM output to be useful, your 'harness' has to incorporate all of that as well, which because it's so much harder than transliterating spec-to-code, balloons tokens exponentially.

      5. Today it is faster, more efficient, and costs less, to work with LLMs "XP" (eXtreme Programming) style, pairing with the LLM actively co-creating and co-reviewing, steering for more effective turns.

      So, your options are:

      - ship garbage while costing less than a median first world SWE

      - pair with the LLM actively for the benefits of XP

      - add enough harness and steering the LLM costs more than SWEs, and still needs a human loop “move fast and break things to find out what's broken” style

      I would expect that within a couple years, these other disciplines can be baked in enough the machine costs less for everything but surprises.

      • Grosvenor 6 hours ago

        > I would expect that within a couple years, these other disciplines can be baked in enough the machine costs less for everything but surprises.

        They already are. I'm successfully using frameworks like bmad to deliver complex apps at that level. My job is to manage the QA, UX, and SRE processes and catch errors.

        I spend more time refining PRDs, epics, and stories than I do elbows-deep in code.

        If I don't like the output of a story, I nuke it, change the story, and have the flanker try again. I'm using the open-source GLM, Kimi, and DeepSeek models. I expect the full pipeline to be good enough by the end of the year.

    • otabdeveloper4 6 hours ago

      > unless they're subsidizing all this

      They literally are. (If by "all this" you mean the subscription future bait-and-switch plans.)

    • rvz 7 hours ago

      But even the $5k-$6k monthly usage on a $200 Codex subscription, even going over their limits, is unrealistic in the long term, and that is just ONE person.

      Let's say I was at the casino and spending a lot on casino chips, but I also happened to work at the casino. I'm not really losing money whether I win or lose, since I'm using the house's money and there's little risk involved in every dice roll or press of the button. The risk is far higher if I don't have that level of access and continue to spend the same amount of money on lots of tokens (or casino chips, spins, or button presses).

      The same is true here with these agents. Some companies will realize that they can no longer afford to spend millions a month on tokens or even startups spending $5k - $6k per person per month on tokens.

      I can only see local, efficient models making sense as a way to recover from this unnecessary spending, or even light gambling on tokens.

  • Robdel12 7 hours ago

    Once you see how much crap they’re running to police the agents on the repo, you’ll ‘get’ the spend https://x.com/steipete/status/2055405041843052792

    I won’t lie, if I had the access to this, I’d do the same exact thing.

    • danpalmer 7 hours ago

      "All that automation allows us to run extremely lean"

      He has a different opinion of what it means to be lean than almost everyone else. That's fine, he's allowed to, but it's something you have to understand to make sense of any of his comments on things. He has a radically different set of values to most people.

      • vntok an hour ago

        His team is basically him and two other humans, powering an ambitious well-known project so successful an industry titan ended up acquihiring him/them. That's pretty lean, no?

        • cedws 34 minutes ago

          What’s ambitious about it? It’s a chatbot that glues APIs together.

    • Philip-J-Fry 7 hours ago

      But it's a self fulfilling prophecy. They need all this stuff because it's a vibe coded app where bugs are randomly introduced, the architecture is overcomplicated and sucks, and stuff is just added for the fun of it.

      Do existing companies run entire end-to-end product integration tests on every single change they make to a repo to make sure something hasn't broken? No, they just architect things in a way such that a minor change to something can be tested in isolation. And that can be automated, deterministically and efficiently.
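      That "tested in isolation" point can be sketched generically: when a change is confined to a small pure unit, a deterministic test covers it with no end-to-end run at all (an illustration only, not OpenClaw's or anyone's actual code):

```python
# Generic illustration: a change confined to a pure function can be
# verified deterministically, with no end-to-end product test.

def estimate_cost(tokens: int, dollars_per_million: float) -> float:
    """Pure unit: token spend in dollars."""
    return tokens / 1_000_000 * dollars_per_million

# Deterministic, isolated checks cover a change to this unit alone.
assert estimate_cost(0, 10.0) == 0.0
assert estimate_cost(2_000_000, 10.0) == 20.0
print("isolated checks passed")
```

      The larger the share of a codebase that can be exercised this way, the less a full integration run is needed on every change.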

      Where I work we can release changes to our production site in minutes almost completely autonomously with high confidence with absolutely zero AI agents in the loop. How did we do it? With lessons learned from the past 5 decades of professional software development experience.

      Let's not forget what OpenClaw is at its core: a glorified cron scheduler. Why on earth does any of this effort need to exist? It's not that deep, it's not that complex, it's all AI for AI's sake.

      • H8crilA 6 hours ago

        OpenClaw has surprisingly few "dumb" bugs. Is it as stable and secure as the Linux kernel? God no, obviously not. But it has never just crashed for me, for example. Bugs are of the type "X with Y and Z disabled and T turned on - doesn't work", where you're likely one of a few people that have ever tried this combination. Not to mention it can then debug itself and file a bug report, with a bugfix - if you give it a GitHub token.

        I run it in a firewalled VM and am very conscious about any tokens I give it access to - so far for all I know this was unnecessary.

        PS. for me the core feature of OpenClaw isn't the cron, though that is nice. It's the memory and instant extensibility. Like it takes 5-15 minutes to add an SSH tool where all agent requests go through a manual review, together with a good auto loaded description that just works in all future sessions.
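        A gate like that SSH tool can be sketched in a few lines. This is a hypothetical illustration (the names and the approval flow are invented, not OpenClaw's actual extension API): every command the agent requests passes through a reviewer callback before it runs.

```python
import subprocess
from typing import Callable

# Hypothetical sketch of a manually-reviewed command gate.
# `approve` stands in for the human reviewer; an interactive setup
# might prompt on a terminal instead of using a callback.
def gated_run(command: list[str], approve: Callable[[str], bool]) -> str:
    rendered = " ".join(command)
    if not approve(rendered):
        return f"rejected: {rendered}"
    done = subprocess.run(command, capture_output=True, text=True)
    return done.stdout

# Example policy: reject anything mentioning "rm".
policy = lambda cmd: "rm" not in cmd
print(gated_run(["echo", "hello"], policy))        # hello
print(gated_run(["rm", "-rf", "/tmp/x"], policy))  # rejected: rm -rf /tmp/x
```

        The point of the pattern is that the agent never touches the transport directly; it only ever sees the gate.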

    • tedggh 7 hours ago

      Same mindset as Marc Andreessen when working on Mosaic: Design for infinite (Internet) bandwidth.

  • zxornand 7 hours ago

    And was he 5x more productive in those 30 days than a year's worth of a dev making $200k/yr?

    Doubtful lol, dude's killing the environment just for fun at this point.

    • wiseowise 7 hours ago

      > And was he 5x more productive in those 30 days than a year's worth of a dev making $200k/yr?

      He was. When it comes to marketing. This is what most people don't understand. Peter is a great marketing guy who got hired because of a hype vision, not because he is an outstanding engineer. Think of it like OpenAI hiring the MrBeast of the coding world.

      • 12arwlaz 6 hours ago

        That is a good comparison. He has the charm and output quality of Mr Beast, but also his marketing prowess.

        Now let's wait until the moderators clean up the wrongthink. He also has censors on his side.

      • simianwords 5 hours ago

        _yawn_ I keep hearing "hype vision". What part of openclaw is hype? It literally works and the adoption has gotten better.

        We really need better standards for disagreement.

        • beepbooptheory 5 hours ago

          What is a standard for a disagreement?

      • Iolaum 6 hours ago

        I'd say he is an outstanding engineer as well. He may favor output over security more than outstanding engineers did in 2025, but in the 2026 world what he does is impressive. And with OpenAI's resources he has turned OpenClaw's security woes around: the latest versions are much more secure than two months ago.

        • sdevonoes 6 hours ago

          It's not impressive. He's a celebrity. Celebrities are not impressive.

    • vessenes 7 hours ago

      If you review the openclaw release schedule and code output you will see that yes, he was. I’m not saying you’ll like what you see, but the openclaw release schedule is well faster than human ability to assess it.

      • Philip-J-Fry 7 hours ago

        With a lot of these AI tools yea, they release very often. But half the features they add aren't even that useful. They just add shit because they can and they introduce bugs and change behaviour all the time.

        Opencode has the same problems. They often do multiple releases of that app a day, yet within the span of a week or two I have had to update my config because some random change has altered the behaviour and my permissions broke. Or I've noticed the way the app renders is suddenly different.

        Yet, my day to day usage has barely changed since the version I installed last year. It's like everything changes but nothing changes.

        • freedomben 6 hours ago

          Even Claude Code has this happen, though perhaps to a lesser extent. I'm getting really tired of having new bugs pop up on me, or subtle behavior changes near daily that require me to change things. The most annoying thing ever, just introduced, is a giant spew of context-mode crap that Claude aggressively adds to every CLAUDE.md file, and I can't find a way to turn it off. I just have to `git checkout CLAUDE.md` repeatedly right now. If I have to add a bash alias to work around your annoying bug, that's pretty bad.

      • throwatdem12311 6 hours ago

        I read the OpenClaw subreddit for comedy. Every release brings floods of posts about how everything is constantly broken and people stopping using it because of how broken it is.

      • risyachka 7 hours ago

        That's the single reason it is faster: just pushing to prod, whatever.

        All projects can become fast if they drop guardrails.

        This does not correlate with a productivity increase.

      • SecretDreams 7 hours ago

        That's a metric for management to pump AI if I've ever seen one.

      • realusername 7 hours ago

        > the openclaw release schedule is well faster than human ability to assess it.

        That doesn't sound very positive to me...

        • vessenes 6 hours ago

          Oh I agree. They’ve said LTS is coming, that will be a relief. I wonder what “LTS” means in this context. Monthly? I’d settle for just not randomly dying on point version updates to config files TBF

      • rowanG077 7 hours ago

        It's fast for sure. But not 5 years of dev time compressed into 30 days fast.

      • minraws 7 hours ago

        I am not joking when I say this: if you pay me 1.3 million dollars today, I will get so much more done with just a single $200 Codex sub in 30 days than he has in 30 days, I can promise you that.

        I just checked the code and feature outputs, and I can build all that in 15 days, for 1.3M USD. Fuck I would do it for 1M...

        Scratch that, if it's 300K then sure I could do the same too, if you paid me that for 30 days of work. Lmao, the quality and the feature volume is just not worth anything worth paying so much money for.

        I am not saying this because I don't like LLMs or I may think that AI coding can't work, but folks whatever openclaw has built for that much money is not worth nearly that much money...

        • stephbook 7 hours ago

          I don't understand. Are you saying you're capable of building a rival to Openclaw in a few days, but you're just choosing not to? That's amazing.

          • hubertdinsk 6 hours ago

            Everyone can build toys. Most people just have enough shame not to publish them.

            The hard part is not building such toys, it's convincing people with money to buy said toy. This is where he earned his applause.

          • fg137 5 hours ago

            Plenty of people can build an OpenClaw rival. Making it viral, however, is a different skill.

          • therouwboat 6 hours ago

            I assume there is already a bunch of OpenClaw rivals, so why bother? It's not like they'll all become super popular and get bought by OpenAI.

          • vntok an hour ago

            His restraint alone is commendable.

  • vslira 7 hours ago

    Regardless of one’s opinion about AI, from a product perspective this seems somewhat similar to the dev using his 48gb ram machine and latest iphone to test an app that will be used by consumers with entry-level devices

  • tom1337890 7 hours ago

    After trying openclaw a bit myself, no wonder. Without the best models, capabilities drop significantly. And I guess he has a lot of automations and stuff, which explains the $19,000 daily spend. I hit my personal spend limit when it cost like 40 USD to get Google auth tokens working, which is very complicated when you run openclaw on a VPS. And it even broke like a week after. Maybe one could justify the 40 USD if it saved my time instead, but I was babysitting openclaw doing it anyhow. So I actually spent double: money plus time.

    Btw, same frustration for me setting up Signal, WhatsApp, or Slack...

    • vessenes 7 hours ago

      It’s a moving target for sure. I’m excited for the LTS release series - keeping up with twice or three times weekly releases is not for humans :)

  • wolttam 7 hours ago

    Nobody here talking about what this represents for demand on these models, if these numbers aren’t made up.

    One person using 600B tokens in a month. The most I’ve hit is around 500M tokens and I thought that was a huge amount.

    We’re going to have some major compute shortages for a while

    • onion2k 7 hours ago

      Jensen Huang was saying humanity is going to need 1000x the current energy production in the future. He might not be wrong.

      • amunozo 4 hours ago

        Or, more probably, he will be wrong. We really need to stop amplifying marketing statements like the bullshit Huang and Amodei tell all the time. There's not much thought behind them, just marketing and wishes.

    • voidfunc 7 hours ago

      500M tokens is easy... I'm burning about 2B a week.

      • danpalmer 7 hours ago

        Anyone can burn tokens. Using them for something useful is the hard part.

        • voidfunc 7 hours ago

          I'm pretty confident it's useful :p

  • mtct88 8 hours ago

    It's a very peculiar way to flex.

    • Avicebron 7 hours ago

      It's like the nerd equivalent of rolling coal?

    • Ekaros 7 hours ago

      A lot of online presence seems to be tied to consumerism. That is, consuming anything, the more ostentatious the better. This is just a specific digital version of that.

    • discordance 7 hours ago

      I work at a bigtech and we’re being measured on how many tokens we consume.

      We know it’s totally stupid, but unfortunately tokenmaxxing is real. I know our management line isn’t that dumb, but this is what you get when the business is selling it.

  • athrow 7 hours ago

    What does he have to show for it?

  • thomasahle 7 hours ago

    He used 600B tokens in 30 days.

    I use more than 150B/month with just 15 codex accounts.

    60 accounts is "just" $12,000/month. So Peter could "save" 100x by using monthly accounts.
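    As a back-of-envelope check of those figures (all numbers are quoted from this thread; the $1.3M is the raw fast-mode API cost from the headline):

```python
# Numbers quoted in this thread.
raw_api_cost = 1_300_000      # $ / month, raw API cost from the tweet
accounts = 60                 # Codex subscriptions needed to match usage
sub_price = 200               # $ / month per subscription

monthly_sub_bill = accounts * sub_price
savings_factor = raw_api_cost / monthly_sub_bill

print(monthly_sub_bill)       # 12000
print(round(savings_factor))  # 108, i.e. roughly "100x"
```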

    Of course, he doesn't have to, as he works at OpenAI now.

    • MadxX79 7 hours ago

      Sounds like a healthy industry, selling tokens at 1000x below cost.

      • wolttam 7 hours ago

        API pricing isn’t cost, we don’t know what cost is.

      • impulser_ 6 hours ago

        I would bet money Anthropic and OpenAI are actually profitable on inference. The problem is they have to spend large sums of money to train models that are essentially worthless after a few months.

      • SecretDreams 7 hours ago

        It's to build a moat, of course!

        Narrator: there was no moat

      • simianwords 5 hours ago

        This performative concern over token costs and subsidisation comes from either ignorance or some latent ideology signalling.

        • xantronix 3 hours ago

          One could say "that's a great point, we should take more direct ideological action to address this issue!", but expounding upon the finer details would likely get one banned here.

    • peteforde 7 hours ago

      What I truly don't understand, as a daily heavy Opus 4.7 user, is how you can coherently prompt 15 different parallel conversations at the same time.

      For me it's not even a "what the hell are you working on" so much as complete inability to understand how you can keep so many different processes working on distinct tasks. It simply doesn't map on to how I use these tools.

      I spend most of my day writing extremely detailed prompts and that's how I'm able to get the sort of excellent results that confound skeptics. But I have to be honest with you: I don't think I can write (or think) fast enough to do two of these at a time, much less 15.

      I definitely could not review what they are generating with any degree of confidence.

      I'm really hoping you can explain what the heck your usage pattern actually looks like, because reading this makes me feel like I'm missing something.

      • thomasahle 7 hours ago

        I'm trying to recreate all the commercial EDA stack in open source. (RTL simulators, synthesis, formal proof tools, etc.)

        Building compilers has a _lot_ of parallel tasks agents can work on.

        Wish me luck..

        • narmiouh 6 hours ago

          Good luck!

        • IshKebab 5 hours ago

          Yeah good luck with that. I find SystemVerilog is probably the thing that AI is worst at, presumably because there's not that much training data out there, and pretty much everything about the commercial tools is paywalled.

      • stikit 5 hours ago

        Those costs are not just tokens used for prompting; they include agent loops, etc.

    • ianm218 7 hours ago

      What do you do with all those accounts?

      • arkadiytehgraet 6 hours ago

        Probably trying to fix their broken personal website with the half of the links there not working at all.

  • Terretta 7 hours ago

    The mentioned menu bar app is a MITM (man in the middle) and rightly discloses that it gets all your session creds and uses them, along with keychain and full disk access:

    Privacy: Reuses existing provider sessions — OAuth, device flow, API keys, browser cookies, local files — so no passwords are stored.

    macOS permissions: Full Disk Access for Safari cookies, Keychain access for cookie decryption and OAuth flows...

    It's excellent this is disclosed as a reminder of how things work and the tradeoffs you're making to use it.

  • yodakohl 7 hours ago

    You can look at the output here: https://github.com/steipete. Sample commit from 5 minutes ago: https://github.com/openclaw/crabbox/pull/113. May 2026: 8,826 commits in 94 repositories.

    • maleldil 30 minutes ago

      He has tests asserting that the output HTML contains specific JS code as strings? What??

  • faangguyindia 7 hours ago

    how many of those tokens were spent to buy fake stars using fake email signups?

  • hansmayer 7 hours ago

    What product or feature did he build with it and how much ARR did it generate for OpenAI?

    • fg137 5 hours ago

      I mean, it's OpenAI.

      If you look at what happened with Sora, you know none of this matters.

      Just wait till this OpenClaw thing is over.

    • lofaszvanitt 7 hours ago

      Marketing

  • 0gs 7 hours ago

    you have to admit: he is not as difficult to project paratechnical admiration onto as sama is. maybe the board wants him to be the next ceo

  • malshe 7 hours ago

    AI bros love hyping about their insanely inefficient token usage. It's become some sort of a dick-measuring contest. And if you work for OpenAI, of course you can claim insane measurements.

    Just last week I saw a dude boasting about how they used their $20/month ChatGPT subscription to earn $15 (or similar trivial amount) in a bug bounty by running the model the whole day. Sam Altman replied to that tweet but not entirely positively.

    OpenAI has been removing limits on token usage to take on Anthropic, but I'm sure most of the users they are acquiring are these AI bros who are burning tokens for the sake of it. Massive price hikes are coming after the OpenAI and Anthropic IPOs, probably an order of magnitude larger than what happened to ride sharing.

  • ExTv 6 hours ago

    thank god im broke lol

    i built my personal app mostly with ollama and it’s been smooth sailing so far. basically openclaw + hermes-style agents running on android phones, and the stuff it can do is kinda insane

  • steve1977 4 hours ago

    And what did he create with it?

  • comboy 8 hours ago

    worth mentioning that openai hired him some time ago

  • zxornand 5 hours ago

    I fear our industry has become a circus, even more than it previously was.

  • Nzen 7 hours ago

    tl;dr Peter Steinberger shared a product demo for CodexBar [0] with a graph of OpenAI token usage. The graph shows over a million dollars spent, a preference for gpt-5.5, and twenty thousand spent today.

    [0] https://github.com/steipete/CodexBar

    However, I do not see a strong reason to believe that this is his actual, personal usage. It could be all openclaw usage, or some subset of OpenAI usage, given that he is inside the company. I suspect it is far more likely to be fake data [1] that exercises the graph library in a visually satisfying way. Notice that it has no usage for a 'week' after April 15 (a Wednesday), but picks up a bunch later. As marketing copy it needn't have any basis in reality [2]. I should hope OpenAI would put a procedure in front of their entrepreneur acquisitions that prevents accidentally exposing trade secrets [3].

    [1] https://github.com/faker-js/faker

    [2] https://www.reddit.com/r/proceduralgeneration/comments/lf2n4...

    [3] https://tvtropes.org/pmwiki/pmwiki.php/Main/PostingWhatYouSh...

    • christoph 7 hours ago

      I view this type of post (his, not yours) as meta deception. I only became aware of this type of deception and its power from a bit of reading in to magicians and stage craft in the last few months. There’s a video on YouTube as well that does a great job of breaking down a Derren Brown stunt that uses it to great effect manipulating the TV viewing audience.

      I'd actually seen the original DB episode years before, when it first aired, and it definitely had an effect on me through this form of manipulation: it altered my internal understanding of marketing/advertising, which was the actual underlying purpose of the episode.

      It’s altered how I internally accept and process information from any 2nd or 3rd hand source. BTW, people aren’t necessarily always aware they’re doing it. We all suffer from our own internal biases and deceptions, and sometimes we spread them unknowingly!

    • elicash 6 hours ago

      This reply implies it was his own usage (in fast mode):

      https://x.com/steipete/status/2055428360789016964

  • Philip-J-Fry 7 hours ago

    So he's spent $20k in one day. There's not a chance in hell he's actually doing productive work with all these tokens.

    Grifters gonna grift. What a state of affairs.

    • Ekaros 7 hours ago

      At this point token spend is itself the product.

      Hopefully we will eventually go back to evaluating the output. Not that I am very hopeful we will learn to do it in a sensible way.

    • malshe 7 hours ago

      Come on, he is very productive on twitter /s

  • clearstack 5 hours ago

    the real story is upstream — NVDA has >70% gross margins on the chips powering these tokens. cost drops, margins don't.

  • 12arwlaz 6 hours ago

    So that is $15.6 million in a year. You could get a decent application for that, instead of broken slop, if you spent it on human salaries.
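    The $15.6 million figure is just the headline monthly spend extrapolated linearly - a rough back-of-the-envelope sketch, assuming the $1.3M/30-day burn rate stays flat:

    ```python
    # Rough annualization of the reported spend.
    # Assumption: the $1.3M/30-day burn rate from the headline stays constant.
    monthly_spend_usd = 1_300_000
    annual_spend_usd = monthly_spend_usd * 12

    print(f"${annual_spend_usd / 1e6:.1f}M per year")  # -> $15.6M per year
    ```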

    • mekael 15 minutes ago

      I've worked for companies with less than that in total personnel expenditures (100-ish people), and they were actually economically productive / provided a tangible service. With $15 million and a year(ish) I could spin up dedicated teams to build multiple profitable applications. That sounds like hubris, and maybe it is, but I'm pretty confident given my domain knowledge.

      Has everyone gone crazy?

  • boesboes 7 hours ago

    He should be brought to The Hague XD

    • pixl97 5 hours ago

      Why just him, and not every other user on HN who has said things like:

      "Programmers don't need unions or professional standards; they will stand in our way of making as much money as we can, and will slow down the speed of software development."

      With that said, HN does provide the tools to find users who said things like this, and if I weren't lazy I'd love to find at least a few who said things like the above but are now pearl-clutching over AI being so bad it's going to crush everything down to a singularity.

  • xbar 5 hours ago

    100x engineers.

  • wiseowise 7 hours ago

    What a clown. And Twitter bozos will cheer and clap. As far as money spent goes, this is still much better than rounding up and/or bombing brown people, but it shows the insanity of the current market. The saddest part is that bootlickers/temporarily embarrassed AI millionaires will defend this.

    And of course I'm just yet another envious hater from "the orange website". Your conscience is clear, AI bros. /s

    • vessenes 7 hours ago

      OpenClaw is the fastest-growing open source project ever. This isn't clowning.

      • orphea 7 hours ago

        Yep, and surely it has nothing to do with buying GitHub stars. Very organic growth.

      • fg137 5 hours ago

        Fastest, so?

        You would need to try really hard to convince me that OpenClaw is more important, or has done more good, than React or the 10 projects "below" it.

        As if any of this matters.

      • wiseowise 7 hours ago

        > OpenClaw is the fastest-growing open source project ever.

        By which metrics?

        > This isn’t clowning.

        Why?

        • vessenes 6 hours ago

          https://www.getpanto.ai/blog/openclaw-ai-platform-statistics

          Because a solo dev has deployed to millions of people in less than eight months, spending, I believe, zero dollars on marketing.

          We should all be so lucky to clown at this scale.

          • wiseowise 5 hours ago

            I'm sorry, where did you get millions of deployments? I see 300k+ GitHub stars, which are worth about as much as a bookmarked page (why don't we count those too?). And 2 million (alleged) website views, which is also basically nothing.

            • vessenes 4 hours ago

              This site digs in more: https://www.trendingtopics.eu/openclaw-numbers/, and refers to stats from gradually.ai. Stepfun flash alone had 3.4 trillion tokens used for openclaw as of mid-April. That's not counting GLM, Kimi, Claude (which was being used so heavily for this that Anthropic instituted emergency policy changes mid billing cycle), etc. In fact, Hermes, a smaller competitor harness from Mistral (153k stars), was large enough to have a custom 'kill' pathway in claude code. (https://github.com/anthropics/claude-code/issues/53262)

              I don't feel the need to spend all day auditing, and I don't care very much, but generally I think the combination of Nvidia corporate enthusiasm, available GitHub stats, and industry analysis tells a pretty coherent story: a project with 70k forks on GitHub is likely to have more than, say, 700k users. My own fork-to-usage ratio is far lower than that.

              Put another way, I would suggest that most public evidence points one direction. If you believe something else, that’s fine. But if you want to convince me there’s less than, say, 100k deployments worldwide, I’d want to understand where those numbers came from before being convinced.

      • throwatdem12311 4 hours ago

        Who gives a shit. It’s a trash project.

      • backscratches 7 hours ago

        Lol if your only metric is "I say so"

      • boxed 7 hours ago

        Both things can be true. The Chinese communist party was one of the biggest social movements ever. Millions died.

        • phpnode 7 hours ago

          Goodness me that’s quite a comparison

          • vrganj 6 hours ago

            Agreed. The Chinese Communist Party lifted a billion people out of poverty. What good has OpenClaw done?

            • boxed 4 hours ago

              Only after ditching communism and embracing markets, mind you.

              • vrganj an hour ago

                Those two are not mutually exclusive. What they're doing now is similar to Lenin's NEP.

      • yygt 6 hours ago

        Ah, here he is, the bozo who posts with bluster, caught out again.

  • throwatdem12311 6 hours ago

    Good for him…?! Who gives a shit. OpenClaw is garbage.

  • lofaszvanitt 7 hours ago

    The OpenClown.