OpenClaw is changing my life

(reorx.com)

81 points | by novoreorx 11 hours ago ago

152 comments

  • i-blis 2 minutes ago ago

    I have always failed to understand the obsessive dream of many engineers to become managers. It seems not to have to do merely with an increase in revenue.

    Is it really to escape from "getting bogged down in the specifics" and being able to "focus on the higher-level, abstract work", to quote OP's words? I thought naively that engineering always has been about dealing with the specifics and the joy of problem solving. My guess is that the drive is toward power. Which is rather natural, if you think about it.

    Science and the academic world

    I have always failed to understand the obsessive dream of many engineers to become managers. It seems not to be merely about an increase in revenue.

    Is it to escape from "getting bogged down in the specifics" and being able to "focus on the higher-level, abstract work", to quote OP's words? I thought naively that engineering has always been about dealing with the specifics and the joy of problem-solving. My guess is that the drive is towards power, which is rather natural, if you think about it.

    Science and the academic world suffer a comparable plague.

  • spoaceman7777 4 minutes ago ago

    This was incredibly vague and a waste of time.

    What type of code? What types of tools? What sort of configuration? What messaging app? What projects?

    It answers none of these questions.

  • gyomu 10 hours ago ago

    > it completely transformed my workflow, whether it’s personal or commercial projects

    > This has truly freed up my productivity, letting me pursue so many ideas I couldn’t move forward on before

    If you're writing in a blog post that AI has changed your life and let you build so many amazing projects, you should link to the projects. Somehow 90% of these posts don't actually link to the amazing projects that their author is supposedly building with AI.

    • Maxion 10 hours ago ago

      A lot of more senior coders when they actively try vibe coding a greenfield project find that it does actually work. But only for the first ~10kloc. After that the AI, no matter how well you try to prompt it, will start to destroy existing features accidentally, will add unnecessary convoluted logic to the code, will leave benhind dead code, add random traces "for backwards compatibility", will avoid doing the correct thing as "it is too big of a refactor", doesn't understand that the dev database is not the prod database and avoids migrations. And so forth.

      I've got 10+ years of coding experience, I am an AI advocate, but not vibe coding. AI is a great tool to help with the boring bits, using it to initialize files, help figure out various approaches, as a first pass code reviewer, helping with configuring, those things all work well.

      But full-on replacing coders? It's not there yet. Will require an order of magnitude more improvement.

      • WillPostForFood 8 minutes ago ago

        Don't you think it has gotten an order of magnitude better in the last 1-2 years? If it only requires another an order of magnitude improvement to full-on replace coders, how long do you think that will take?

      • dumbmrblah 2 hours ago ago

        I agree with you in part, but I think the market is going to shift so that you won’t so many need “mega projects”. More and more, projects will be small and bespoke, built around what the team needs or answering a single question rather than forcing teams to work around an established, dominant solution.

        • izacus 2 hours ago ago

          How much are you willing to bet on this outcome and what metrics are you going to measure it with when we come to collect in 3 years?

          • mathisfun123 an hour ago ago

            This is the way: make every one of these people with their wild ass claims put their money where their mouths are.

            • Dansvidania 37 minutes ago ago

              Hold up. This is a funny comment but thinking should be free. It’s when they are trying to sell you something (looking at you “all the AI CEOs”) that unsubstantiated claims are problematic.

              Then again the problem is that the public has learned nothing from the theranos and WeWorks and even more of a problem is that the vc funding works out for most of these hype trains even if they never develop a real business.

              The incentives are fucked up. I’d not blame tech enthusiasts for being too enthusiastic

              • izacus 5 minutes ago ago

                I'm fine with free thinking, but a lot of these are just so repetitive and exausting because there's absolutely no backing from any of those claims or a thread of logic.

                Might as well talk about how AI will invent sentient lizards which will replace our computers with chocolate cake.

      • alpineman 10 hours ago ago

        You’re right, but on the other hand once you have a basic understanding security, architecture, etc you can prompt around these issues. You need a couple of years of experience but that’s far less then the 10-15 years of experience you needed in the past.

        If you spend a couple of years with an LLM really watching and understanding what it’s doing and learning from mistakes, then you can get up the ladder very quickly.

        • dickersnoodle 9 minutes ago ago

          This is the funniest thing I've read all week.

        • spprashant 3 hours ago ago

          > If you spend a couple of years with an LLM really watching and understanding what it’s doing and learning from mistakes, then you can get up the ladder very quickly.

          I don't feel like most providers keep a model for more than 2 years. GPT-4o got deprecated in 1.5 years. Are we expecting coding models to stay stable for longer time horizons?

        • Nextgrid 9 hours ago ago

          I find that security, architecture, etc is exactly the kind of skill that takes 10-15 years to hone. Every boot camp, training provider, educational foundation, etc has an incentive to find a shortcut and we're yet to see one.

          A "basic" understanding in critical domains is extremely dangerous and an LLM will often give you a false sense of security that things are going fine while overlooking potential massive security issues.

          • nneonneo 3 hours ago ago

            Somewhere on an HN thread I saw someone claiming that they "solved" security problems in their vibe-coded app by adding a "security expert" agent to their workflow.

            All I could think was, "good luck" and I certainly hope their app never processes anything important...

            • nxobject 43 minutes ago ago

              Found a problem? Slap another agent on top to fix it. It’s hilarious to see how the pendulum’s swung away from “thinking from first principles as a buzzword”. Just engineer, dammit…

    • helloplanets 10 hours ago ago

      Specifics on the setup. Specifics on the projects.

      SHOW ME THE MONEY!!!

      • fullstackchris 16 minutes ago ago

        exactly. so much text with so little actionable or notable content... actually 0

    • DeathArrow 9 hours ago ago

      >Somehow 90% of these posts don't actually link to the amazing projects that their author is supposedly building with AI.

      Maybe they don't feel like sharing yet another half working Javascript Sudoku Solver or yet another half working AI tool no one will ever use?

      Probably they feel amazed about what they accomplished but they feel the public won't feel the same.

      • ramon156 an hour ago ago

        Then, in my opinion, there's nothing revolutionary about it (unless you learned something, which... no one does when they use LLMs to code)

        • xcf_seetan 6 minutes ago ago

          I am an old school c++ programmer and actually I have learned modern c++ just by using LLMs.

      • ImPostingOnHN an hour ago ago

        The article made it seem that the tool made them into the manager of a successful company, rather than the author of a half finished pet project

    • lostmsu 5 hours ago ago

      AI is great, harness don't matter (I just use codex). Use state of the art models.

      GPT-5.2 fixed my hanging WiFi driver: https://gist.github.com/lostmsu/a0cdd213676223fc7669726b3a24...

      • consp 30 minutes ago ago

        Fixing mediatek drivers is not the flex you think it is.

        • mitkebes 19 minutes ago ago

          It is if it's something they couldn't do on their own before.

          It's a magical moment when someone is able to AI code a solution to a problem that they couldn't fix on their own before.

          It doesn't matter whether there are other people who could have fixed this without AI tools, what matters is they were able to get it fixed, and they didn't have to just accept it was broken until someone else fixed it.

  • charles_f an hour ago ago

    > My role as the programmer responsible for turning code into reality hasn’t changed

    > OpenClaw gave me the chance to become that super manager [...] A manager shouldn’t get bogged down in the specifics—they should focus on the higher-level, abstract work

    These two propositions seem to be highly incompatible

  • perbu 10 hours ago ago

    This is quite a low quality post. There is nothing of substance here. Just hot air.

    The only software I've seen designed and implemented by OpenClaw is moltbook. And I think it is hard to come up with a bigger pile of crap than Moltbook.

    If somebody can build something decent with OpenClaw, that would help add some credibility to the OpenClaw story.

    • blazarquasar 9 minutes ago ago

      Given that the authors previous post was about how the Rabbit R1 has “the potential to change the world”, I don’t expect much in the way of critical assessment here.

    • jorisboris 9 hours ago ago

      I was reading the post and had the same feeling of superficiality. I don’t think a human wrote it tbh

      • j2bax 2 hours ago ago

        Very likely part of their bots output. The ultimate goal isn’t to make useful things, but to “teach” others how to do it and convince them how successful they can become.

    • charcircuit 10 hours ago ago

      My openclaw built skills (python scripts) to interact with the Notion API which allows it to make work items for me and evenly distribute them, setting due dates on my calendar.

      • exitb 10 hours ago ago

        It’s a fun example, because openclaw is the boss in it and you are the agent.

    • whateveracct 10 hours ago ago

      AI is all facade

  • siva7 18 minutes ago ago

    Besides that blog post obviously being written by AI, can someone here confirm how credible the hype about openclaw is? I'm already very proficient at using Claude Code anywhere, so what would i gain really with openclaw?

  • wiz21c 2 hours ago ago

    I want an OpenClaw that can find and call a carpenter, a plumber when I need him; take appointment for all the medical stuff (I do most of that online), pays the bills and make me a nice alarm when there's something wrong, order train tickets and book hotel when I need to.

    That would be really helpful.

    • charles_f an hour ago ago

      While Claude was trying fix a bug for me (one of these "here! It's fixed now!" "no it's not, the ut still doesn't pass", "ah, I see, lets fix the ut", "no you dont, fix the code" loops), I was updating my oncall rotation after having to run after people to refresh my credentials to so, after attending a ship room where I had to provide updates and estimates.

      Why isn't Claude doing all that for me, while I code? Why the obsession that we must use code generation, while other gabage activities would free me to do what I'm, on paper, paid to do?

      It's less sexy of course, it doesn't have the promise of removing me in the end. But the reason, in the present state, is that IT admins would never accept for an llm to handle permissions, rotations, management would never accept an llm to report status or provide estimate. This is all "serious" work where we can't have all the errors llm create.

      Dev isn't that bad, devs can clean slop and customers can deal with bugs.

    • podgorniy an hour ago ago

      > find and call a carpenter, a plumber when I need him

      Good luck hoping that none from the big money would try to stand between you and someone giving you a service (uber, airbnb, etsy, etc) and get rent from that.

    • hermannj314 an hour ago ago

      I hate receiving competitive quotes so I take what the 1st guy offers or dont engage at all. AI agents could definitely be useful gathering bids where prices are hidden behind "talk to our sales specialist" gates.

  • Inityx 10 hours ago ago

    > My answer is: become a “super manager.”

    Honestly I'd rather die

    • sph 19 minutes ago ago

      "and then the engineers turned themselves into managers, funniest thing I've ever seen"

  • treetalker 10 hours ago ago

    What substantial and beneficial product has come of this author’s, or anybody’s, use of OpenClaw? What major problems of humanity have they chipped away at, let alone solved — and is there a net benefit once the negatives are taken into account?

    • progx an hour ago ago

      Nothing, that is why it change his life ;-)

  • reidrac 2 hours ago ago

    > A manager shouldn’t get bogged down in the specifics—they should focus on the higher-level, abstract work. That’s what management really is.

    I don't know about this; or at least, in my experience, is not a what happens with good managers.

    • ibaikov an hour ago ago

      Indeed. When I was just starting every blog and tweet screamed micro-management sucks. It does if the manager does this all the time. But sometimes it is extremely important and prevents disasters.

      I guess best managers just develop the hunch and know when to do this and when to ask engineers for smallest details to potentially develop different solutions. You have to be technical enough to do this

  • vnlamp 2 hours ago ago

    When everyone can become a manager easily, then no one is a manager.

  • ainiro an hour ago ago

    You should check out Magic Cloud ==> https://www.youtube.com/watch?v=k6eSKxc6oM8

  • SyneRyder 10 hours ago ago

    The post mentions discussing projects with Claude via voice, but it isn't clear exactly how. Do they just mean sending voice memos via Whatsapp, the basic integration that you can get with OpenClaw? (That isn't really "discussing".) Or is this a full blown Eleven Labs conversational setup (or Parakeet, Voxtral, or whatever people are using?)

    I'm not running OpenClaw, but I've given Claude its own email address and built a polling loop to check email & wake Claude up when I've sent it something. I'm finding a huge improvement from that. Working via email seems to change the Claude dynamic, it feels more like collaborating with a co-worker or freelancer. I can email Claude when I'm out of the house and away from my computer, and it has locked down access to use various tools so it can build some things in reply to my emails.

    I've been looking into building out voice memos or an Eleven Labs setup as well, so I can talk to Claude while I'm out exercising, washing dishes etc. Voice memos will be relatively easy but I haven't yet got my head around how to integrate Eleven Labs and work with my local data & tools (I don't want a Claude that's running on Eleven Labs servers).

    • erksa 2 hours ago ago

      Openclaw is just that, it wakes on send and as cronjobs and get to work.

      What made it so popular I think is that it made it easy to attach it to whatever "channel" you're comfortable with. The mac app comes with dictation, but unsure the amount of setup to get tts back.

  • jruz 10 hours ago ago

    I admire the people that can live happily in the ignorance of what’s under the hood, in this case not even under the layer of claude code because that was too much aparently so people are now putting openclaw+telegram on top of that.

    And me ruining my day fighting with a million hooks, specs and custom linters micromanaging Claude Code in the pursuit of beautiful code.

    • thewhitetulip an hour ago ago

      It's absolutely terrifying that Ai will control everything in your PC using openclaw. How are people ok with it?!

  • zagfh 2 hours ago ago

    If everyone does that, the value of his "creations" are zero. Provided of course that it works and this isn't just another slopfluencer fulfilling his quota.

    So, OpenClaw has changed his life: It has accelerated the AI psychosis.

  • podgorniy an hour ago ago

    The impact from appearing on HN is disproportionately bigger than anything else.

    It's the endgame.

  • maciejzj an hour ago ago

    Mind you, that regardless of your sentiment towards OpenClaw, not everyone is able to afford a sparse Mac Mini (especially given ram prices) and a ton of Claude tokens/super beefy GPU for local models to run this stuff. That's to the supposed "democratisation of knowledge and technology".

    • st3fan an hour ago ago

      FWIW Mac Minis have not increased in price because of "RAM Prices". Same models cost exactly the same as a year ago. Maybe it will change in the future, maybe not. Who knows. But right now Apple seems to have secure a good stash of RAM to use and avoid price changes.

  • hackermeows an hour ago ago

    been writing code for 15 years now , agree with the author about this one , open-claw like agents are going to be the future. Already automated away a bunch of routine stuff like checkin FB marketplace if l’m looking to but something , daily stock position brief , calendar management , grocery planning and buying , workout and calorie tracking . Stopped using a bunch of app directly overnight . The “mid-wits” are the one with their head still stuck under that sand

    • fullstackchris 11 minutes ago ago

      and the "hype-wits" don't realize openclaw is just claude with good mcp. there is nothing new under the sun. its just the first time someone was benevolent enough to open source the codebase to the public or it went viral enough to matter... and yet what people focus on is its "emergence" or "agi" - neither of which are remotely true. but good luck "crushing" those "mid-wits"

  • kylegalbraith 10 hours ago ago

    What’s the security situation around OpenClaw today? It was just a week or two ago that there was a ton of concern around its security given how much access you give it.

    • mcintyre1994 9 hours ago ago

      I don’t think there’s any solution to what SimonW calls the lethal trifecta with it, so I’d say that’s still pretty impossible.

      I saw on The Verve that they partnered with the company that repeatedly disclosed security vulnerabilities to try to make skills more secure though which is interesting: https://openclaw.ai/blog/virustotal-partnership

      I’m guessing most of that malware was really obvious, people just weren’t looking, so it’s probably found a lot. But I also suspect it’s essentially impossible to actually reliably find malware in LLM skills by using an LLM.

      • veganmosfet 2 hours ago ago

        Regarding prompt injection: it's possible to reduce the risk dramatically by: 1. Using opus4.6 or gpt5.2 (frontier models, better safety). These models are paranoid. 2. Restrict downstream tool usage and permissions for each agentic use case (programmatically, not as LLM instructions). 3. Avoid adding untrusted content in "user" or "system" channels - only use "tool". Adding tags like "Warning: Untrusted content" can help a bit, but remember command injection techniques ;-) 4. Harden the system according to state of the art security. 5. Test with red teaming mindset.

        • sathish316 an hour ago ago

          Anyone who thinks they can avoid LLM Prompt injection attacks should be asked to use their email and bank accounts with AI browsers like Comet.

          A Reddit post with white invisible text can hijack your agent to do what an attacker wants. Even a decade or 2 back, SQL injection attacks used to require a lot of proficiency on the attacker and prevention strategies from a backend engineer. Compare that with the weak security of so called AI agents that can be hijacked with random white text on an email or pdf or reddit comment

          • veganmosfet an hour ago ago

            There is no silver bullet, but my point is: it's possible to lower the risk. Try out by yourself with a frontier model and an otherwise 'secure' system: the "ignore previous instructions" and co. are not working any more. This is getting quite difficult to confuse a model (and I am the last person to say prompt injection is a solved problem, see my blog).

        • habinero 2 hours ago ago

          > Adding tags like "Warning: Untrusted content" can help

          It cannot. This is the security equivalent of telling it to not make mistakes.

          > Restrict downstream tool usage and permissions for each agentic use case

          Reasonable, but you have to actually do this and not screw it up.

          > Harden the system according to state of the art security

          "Draw the rest of the owl"

          You're better off treating the system as fundamentally unsecurable, because it is. The only real solution is to never give it untrusted data or access to anything you care about. Which yes, makes it pretty useless.

          • CuriouslyC an hour ago ago

            Wrapping documents in <untrusted></untrusted> helps a small amount if you're filtering tags in the content. The main reason for this is that it primes attention. You can redact prompt injection hot words as well, for cases where there's a high P(injection) and wrap the detected injection in <potential-prompt-injection> tags. None of this is a slam dunk but with a high quality model and some basic document cleaning I don't think the sky is falling.

            I have OPA and set policies on each tool I provide at the gateway level. It makes this stuff way easier.

            • veganmosfet an hour ago ago

              The issue with filtering tags: LLM still react to tags with typos or otherwise small changes. It makes sanitization an impossible problem (!= standard programs). Agree with policies, good idea.

              • CuriouslyC 39 minutes ago ago

                I filter all tags and convert documents to markdown as a rule by default to sidestep a lot of this. There are still a lot of ways to prompt inject so hotword based detection is mostly going to catch people who base their injections off stuff already on the internet rather than crafting it bespoke.

          • veganmosfet an hour ago ago

            Agree for a general AI assistant, which has the same permissions and access as the assisted human => Disaster. I experimented with OpenClaw and it has a lot of issues. The best: prompt injection attacks are "out of scope" from the security policy == user's problem. However, I found the latest models to have much better safety and instruction following capabilities. Combined with other security best practices, this lowers the risk.

      • madeofpalk 3 hours ago ago

        Honestly, 'malware' is just the beginning it's combining prompt injection with access to sensitive systems and write access to 'the internet' is the part that scares me about this.

        I never want to be one wayward email away from an AI tool dumping my company's entire slack history into a public github issue.

    • veganmosfet 2 hours ago ago

      It's still bad, even if they fixed some low hanging fruits. Main issue: prompt injection when using the LLM "user" channel with untrusted content (even with countermeasures and frontier model) combined with insecure config / plugins / skills... I experimented with it: https://veganmosfet.github.io/2026/02/02/openclaw_mail_rce.h...

    • ricardobayes 10 hours ago ago

      Can only reasonably be described as "shitshow".

    • kolja005 9 hours ago ago

      My company has the github page for it blocked. They block lots of AI-related things but that's the only one I've seen where they straight up blocked viewing the source code for it at work.

    • bowsamic 10 hours ago ago

      Many companies have totally banned it. For example at Qt it is banned on all company devices and networks

  • timcobb 10 hours ago ago

    I think in the future this might be known as AI megalomania

    • siva7 7 minutes ago ago

      It is already known as Ai psychosis and ai productivity porn

  • meindnoch 3 hours ago ago

    These are the same people who a few years ago made blogposts about their elaborate Notion (or Roam "Research") setups, and how it catalyzed them to... *checks notes* create blogposts about their elaborate Notion setups!

    • spaceywilly 3 hours ago ago

      Quite literally, the previous post on this blog is from 2024 talking about what a revolution the Rabbit R1 is. We all know how that turned out. This is why I give every new trendy developer tool a few months to see if it’s really a good thing or just hype.

      • rileymichael 2 hours ago ago

        > Generally, I believe R1 has the potential to change the world.

        oh man this is fantastic

      • cactacea 39 minutes ago ago

        > A milestone in the evolution of our digital organ.

      • tartoran an hour ago ago

        Maybe that's why these users go crazy over openclaw, they may need or yearn for such a tool. I don't but that doesn't mean there isn't a market for it though.

        • throwup238 an hour ago ago

          There isn’t a market. OP wrote that Rabbit R1 post after seeing the release video (according to a comment on this link, their blog post says otherwise) and immediately called it a ”milestone in the evolution of our digital organ”. Their judgement is obviously nonexistent.

          Something tells me they never even downloaded OpenClaw before writing this blog post. It’s probably an aspirational vision board type post their life coach told them to write because they kept talking about OepnClaw during their sessions, and the life coach got tired of their BS.

      • scwoodal 31 minutes ago ago

        The jokes write themselves. Now you can have both, Openclaw comes preloaded on the R1.

        https://www.rabbit.tech/rabbit-r1

      • Dieselroar88 38 minutes ago ago

        Literally came here to make this comment….

        No desire to be a hater or ignore the possibility of any tech but…yeah…transformative that was not

    • trentnix 2 hours ago ago

      Midwits love this kind of stuff. Movie critics heap praise on forgettable movies to get their names and quotes on the movie poster. Robert Scoble made an entire career in tech bloviation hyping the current thing and got invited to the coolest parties. LinkedIn is a word salad conveyor belt of this kind of useless nonsense.

      It's a racket never ends.

    • lm28469 an hour ago ago

      These people are always swarming the new shiny gadgets thinking it will finally unfuck their miserable life while not noticing that the chase is why they've been miserable this whole time. What they need is 6 month in a cabin in the middle of nowhere without internet

    • progx an hour ago ago

      Not people, that post is from OpenClaw... 100% ;-)

      • gbnwl 32 minutes ago ago

        100% a precursor to a follow up post like "I asked OpenClaw to write me a blog post about how it's changing my life and it hit the top of HackerNews"

    • lnenad 44 minutes ago ago

      Oh my god your verbalization of this phenomenon is spot on! I feel validated that someone else feels this way.

    • escapecharacter 2 hours ago ago

      I’m working on a product related to “sensemaking”. And I’m using this abstract, academic term on purpose to highlight the emotional experience, rather than “analysis” or “understanding”.

      It is a constant lure products and tools have to create the feeling of sensemaking. People want (pejorative) tools that show visualizations or summaries, without thinking about the particular visual/summary artifact is useful, actionable or accurate!

    • obsidianbases1 3 hours ago ago

      Don't forget about Obsidian

      • konart 2 hours ago ago

        Both are great tools though.

        They (or their devs) are not at fault that some people honestly believe you can't be as productive or consistent without a "thought garden" or whatever.

      • CuriouslyC 2 hours ago ago

        Obsidian is local first with basically zero lock-in, and it's heavily community driven. Don't lump it in with Notion.

        • baby_souffle 2 hours ago ago

          True, but it does have the cottage industry of influencers selling their vault skeleton and template/plugin packs for unlocking maximum productivity… same as notion. And Evernote, to an extent, before that.

          • eddythompson80 an hour ago ago

            Yeah, but so does many other good things. Exercise is generally a good thing, so is decent quality food, meditation, philosophy, healthy relationships, etc. Those are things that also have a cottage industry of influencers who are selling their “thing” about how you should do it. The problem there is the influencers and their culture not the food or working out, etc.

            It only becomes problematic if the “good” thing also indulges in the hubris of influencers because they view it as good marketing. Like when an egg farm leans in “orange yolk”

          • shimman 2 hours ago ago

            Yeah, after getting burnt out on Evernote I just use basic markdown files for my notes. I never bother with anymore features beyond "write to file" or "grep directory for keywords" because I know I'll personally not benefit from them. The act of writing notes is what is useful to me, retrieving the notes are hardly ever useful.

          • edoceo 2 hours ago ago

            And how to properly use your Day-Runner before that (c1996). Productivity hacks sell because humans want silver bullets.

    • krater23 3 hours ago ago

      But today, the AI is writing the blogposts for them.

  • zkmon 10 hours ago ago

    That's a very inefficient way to interact with CC. There will be transmission losses that need too much feedback looping.

    So, it appears that we have come a long way bubbling up through abstraction layers: assembly code -> high-level languages -> scripting -> prompting -> openclaw.

  • relativeadv 2 hours ago ago

    Once again I am asking for you to please show us what you have built. Bring receipts.

  • mier an hour ago ago

    if 90% is good enough, you are a winner to try your idea and fail fast. if you want to reach 91 or more, AI is a slop and hype to burn our pensions and contribute to vastly to global warming and cognitive decline consumerism evolution

  • yellow_lead 9 hours ago ago

    > My productivity did improve, but for any given task, I still had to jump into the project, set up the environment, open my editor and Claude Code terminal. I was still the operator; the only difference was that instead of typing code manually, I was typing intent into a chat box.

    > Then OpenClaw came along, and everything changed.

    > After a few rounds of practice, I found that I could completely step away from the programming environment and handle an entire project’s development, testing, deployment, launch, and usage—all through chatting on my phone.

    So, with Claude Code, you're stuck typing in a chat box. Now, with OpenClaw, you can type in a chat box on your phone? This is exciting and revolutionary.

  • PKop 3 hours ago ago

    Where's the code and what did you build? Everything else is just platitudes

  • Giorgi 2 hours ago ago

    Yeah i do not know, still waiting to see actual openclaw practical application usage in real world

  • necklesspen 10 hours ago ago

    The same author had good things to say about the R1, a device you generally won't see many glowing reviews about. (https://reorx.com/blog/rabbit-r1-the-upgraded-replacement-fo...)

    Maybe it's unfair to judge an author's current opinion by their past opinion - but since the piece is ultimately an opinion based on their own experience I'm going to take it along a giant pile of salt that the author's standards for the output of AI tools are vastly different than mine.

    • bwb 10 hours ago ago

      Hah, I read that as well and made a big "hmmmmmmmmm" sound...

      The last time I talked to someone about OpenClaw and how it is helping them, they told me it tells them what their calendar has for them today or auto-tweets for them (i.e., non-human spam). The first is as simple as checking your calendar, and the second is blatant spam.

      Anyone found some good use cases beyond a better interface for AI code assistance?

      • teratron27 an hour ago ago

        A dev on my team was trying to get us to setup OpenClaw, harping on about how it would make our lives easier etc, etc. (even though most of the team was against the idea due to the security issues and just not thinking it would be worth it).

        Their example use case was for it to read and summarize our Slack alerts channel to let us know if we had any issues by tagging people directly... the Slack channel is populated by our monitoring tools that also page the on-call dev for the week.

        The kicker... this guy was the on-call dev that week and had just been ignoring the Slack channel, emails and notifications he was getting!

      • obsidianbases1 2 hours ago ago

        > how it is helping them

        This should be the opening for every post about the various "innovations" in the space.

        Preferably with a subsequent line about the manual process that was worth putting the extra effort into prior to the shiny new thing.

        I really can imagine a better UX then opening my calendar in one-click and manual scanning.

        Another frequent theme is "tell me the weather." One again, Google home (alexa or whatever) handles it while I'm still in bed and let's me go longer without staring at a screen.

        The spam use-case is probably the best use-case I've seen, as in it truly saves time for an equal or better result, but that means being cool with being a spammer.

        • nxobject an hour ago ago

          Absolutely - in general, the tendency to want to replace investing in UI/UX with omnipotent chatbots raises my blood pressure.

      • sanex 3 hours ago ago

        This is a pretty simple thing to boil the ocean over but it was fun nonetheless. I've been applying for jobs but I don't want Gmail notifications on my phone because of all the spam, I'm really picky about push notifications. I told my openclaw adjacent ai bot to keep an eye and let me know if any of the companies I applied to send me an email. Worked great. CEO LARPing at its finest. Also a big fan of giving it access to my entire obsidian vault so if I'm on the go instead of trying to use obsidian on the phone I just tell it what I need to read or update.

        I'm not running openclaw itself. I am building a simpler version that I trust and understand a lot more but ostensibly it's just another always on Claude code wrapper.

      • CuriouslyC 2 hours ago ago

        Not via OpenClaw, but I automate breakdowns of my analytics and I recently started getting digests of social media conversations relevant to my interests. It's also good for monitoring services and doing first line triage on issues.

      • Spooky23 an hour ago ago

        The marketing of OpenClaw is amazing. They had a one-liner install that didn't work, started the hype-train days before they changed the name of the product and have everyone from nerd influencers to CNBC raving about it.

        I'm waiting for the grift!

      • gyomu 10 hours ago ago

        I think a sizable proportion of people just want to play "large company exec". Their dream is to have an assistant telling them how busy their day is, all the meetings they have, then to go to those meetings and listen to random fluff people tell them while saying "mmh yeah what a wise observation" or "mmh no not enough synergy here, let's pivot and really leave our mark on this market, crunch the numbers again".

        I can't come up with any other explanation for why there seems to be so many people claiming that AI is changing their life and workflow, as if they have a whole team of junior engineers at their disposal, and yet have really not that much to show for it.

        They're so white collar-pilled that they're in utter bliss experiencing a simulation of the peak white collar experience, being a mid-level manager in meetings all day telling others what to do, with nothing tangible coming out of it.

        • mikkupikku 3 hours ago ago

          Everybody here probably already has an opinion about the utility of coding agents, and having it manage your calendar isn't terribly inspired, but there is a lot more you can do.

          To be specific, for the past year I've been having numerous long conversations about all the books I've read. I talk about what I liked, didn't like, the ideas and and plots I found compelling or lame, talks about the characters, the writing styles of authors, the contemporary social context the authors might have been addressing, etc. Every aspect of the books I can think off. Then I ask it for recommendations, I tell it given my interests and preferences, suggest new books with literary merit.

          ChatGPT just knocks this out of the park, amazing suggestions every time, I've never had so much fun reading than in the past year. It's like having the world's best read and most patient librarian at your personal disposal.

        • sshine 8 hours ago ago

          > LARP'ing CEO

          My experience with plain Claude Code is that I can step back and get an overview of what I'm doing, since I tend to hyperfocus on problems, preventing me from having a simultaneous overview.

          It does feel like being a project manager (a role I've partially filled before) having your agency in autopilot, which is still more control than having team members do their thing.

          So while it may feel very empowering to be the CEO of your own computer, the question is if it has any CEO-like effect on your work.

          Taking it back to Claude Code and feeling like a manager, it certainly does have a real effect for me.

          I won't dispute that running a bunch of agents in sync won't give you an extension of that effect.

          The real test is: Do you invoice accordingly?

      • skerit 3 hours ago ago

        > Anyone found some good use cases beyond a better interface for AI code assistance

        Well... no. But I do really like it. It's just an always-on Claude you can chat with in Telegram, that tries to keep context, that has access to a ton of stuff, and it can schedule wakeup times for itself.

        • adastra22 an hour ago ago

          It really doesn’t have to be more complicated than that. User experience is important.

      • Lapel2742 2 hours ago ago

        > Anyone found some good use cases beyond a better interface for AI code assistance?

        Yesterday, I saw a demo of a product similar to OpenClaw. It can organize your files and directories and works really great (until it doesn't, of course). But don't worry, you surely have a backup and need to test the restore function anyway. /s

        Edit:

        So far, I haven’t found a practical use case for this. To become truly useful, it would need access to certain resources or data that I’m not comfortable sharing with it.

    • novoreorx 9 hours ago ago

      Our cognition evolves over time. That article was written when the Rabbit R1 presentation video was first released, I saw it and immediately reflect my thoughts on my blog. At that time, nobody had the actual product, let alone any idea how it actually worked.

      Even so, I still believe the Rabbit has its merits. This does not conflict with my view that OpenClaw is what is truly useful to me.

      • madeofpalk 3 hours ago ago

        I think this shows an unfettered optimism for things we don't know anything about. Many see this as a red flag for the quality of opinions.

        > R1 is definitely an upgraded replacement for smartphones. It’s versatile and fulfills all everyday requirements, with an interaction style akin to talking to a human.

        You seemed pretty certain about how the product worked!

        • sejje 2 hours ago ago

          No, he seemed pretty certain about how they demoed it.

          We're allowed to have opinions about promises that turn out not to be true.

          If the rabbit had been what it claimed it would be, it would have been an obvious upgrade for me, at least.

          I just want a voice-first interface.

      • throwup238 2 hours ago ago

        You literally wrote in the blog post:

        > Today, Rabbit R1 has been released, and I view it as a milestone in the evolution of our digital organ.

        You viewed it as a “milestone in the evolution of our digital organ” without you let alone anyone having even tested it?

        Yet you say ”That article was written when the Rabbit R1 presentation video was first released, I saw it and immediately reflect my thoughts on my blog.”?

    • huijzer 9 hours ago ago

      > Maybe it's unfair to judge an author's current opinion by their past opinion

      Yes I think it is

      • gnz11 3 hours ago ago

        The blogger lists 6 years of experience on their homepage. Safe to take their opinions with a grain of salt.

      • bspinner 8 hours ago ago

        No, it's actually reasonable und perfectly fine. Reputation, trustworthiness, limited/different perspectives exist.

        And one sided media does as weil. Or do you expect Fox News to publish an unbiased report just next?

  • bethekidyouwant 3 hours ago ago

    This is for people that talk to ChatGPT at length in voice mode. You are not the audience.

  • HackerThemAll 3 hours ago ago

    If my aim was to be a manager, I would have graduated a business university. But I want to have my hands and head dirty of programming, administering, and doing other technical stuff. I'm not going to manage, be it people or bots. So no, sorry.

    And 99% those AI-created "amazing projects" are going to be dead or meaningless in due time, rather sooner than later. Wasted energy and water, not to mention the author's lifetime.

    • paodealho 2 hours ago ago

      Unfortunately to the detriment of the people who like doing the actual work, software dev pays far too good salaries. In the last 10 years the industry has been inundated with people from other backgrounds, who think "alignment" and "coordination" and "calibration" and "strategy" is all there is to it.

  • aiobe 10 hours ago ago

    what was the instruction to write and promote this post?

    • ricardobayes 10 hours ago ago

      On that thought you got to ask yourself why almost every thread has 200+, some even 500+ comments now. Definitely wasn't like this a few months ago

    • phito 10 hours ago ago

      Exactly, I'm not going to waste my time reading this AI generating post that's basically promoting itself.

      What I really wonder, is who the heck is upvoting this slop on hackernews?

      • Kiro 3 hours ago ago

        I did because I want to see a critical discussion around it. I'm still trying to figure out if there's any substance to OpenClaw, and hyperbolic claims like this is a great way to separate the wheat from the chaff. It's like Cunningham's Law.

      • guerrilla 10 hours ago ago

        It only has 11 points. It just got caught in the algorithm. That's all.

        • phito 9 hours ago ago

          But I see these kinds of post every day on HN with hundreds of upvotes. And it's a thousand times worse on Reddit.

          • tkel 7 hours ago ago

            The hundreds of billions of dollars in investment probably have something to do with it. Many wealthy/powerful people are playing for hegemonic control of a decent chunk of the US economy. The entire GDP increase for the US last year was due to AI and by extension data centers. So not only the AI execs, but every single capitalist in the US whose wealth depends on line going every up year. Which is, like, all of them. In the wealthiest country on the planet.

            So many wealthy players invested the outcome, and the technology for astroturfing (LLMs) can ironically be used to boost itself and further its own development

            • phito 5 hours ago ago

              I was thinking the exact same thing earlier today. I think you're right. They have so much at stake, infinite money and the perfect technology to do it.

      • CamperBob2 10 hours ago ago

        Another good example, from yesterday: https://news.ycombinator.com/item?id=46860845

        Articles like these should be flagged, and typically would be, but they sometimes appear mysteriously flag-proof.

  • fullstackchris 16 minutes ago ago

    another slop post - show costs, show what you have built, or at least a tiny snippet of code? (or even just direct links to git repo or projects IN post please?)

    getting sick of this fluff stuff

  • cute_boi 10 hours ago ago

    I think everyone cheering for AI will become its archenemy later. I’m very happy that companies like Salesforce and Duolingo, which fired so many people, are now tanking badly.

  • gpvos 9 hours ago ago

    Thank you; this explains why working with AI doesn't interest me.

  • politelemon 6 hours ago ago

    > Thank you, AGI—for me, it’s already here.

    Poe's law strikes... I can't tell if this is satire.

  • DeathArrow 9 hours ago ago

    If you use Cursor or Claude, you have to oversee it and steer it so it gets very close to what you want to achieve.

    If you delegate these tasks to OpenClaw, I am not really sure the result is exactly what you want to achieve and it works like you want it to.

  • cubefox 10 hours ago ago

    From his previous blog post:

    > Generally, I believe [Rabbit] R1 has the potential to change the world. This is a thought that seldom comes to my mind, as I have seen numerous new technologies and inventions. However, R1 is different; it’s not just another device to please a certain niche. It’s meticulously designed to serve one significant goal for all people: to improve lifestyle in the digital world.

  • zeknife 3 hours ago ago

    I get the impression LLM agents are a bit like tamagochi but for tech bros.

  • nurettin 10 hours ago ago

    This euphoria quickly turns into disappointment once you finish scaffolding and actually start the development/refinement phase and claude/codex starts shitting all over the code and you have to babysit it 100% of the time.

    • HumanOstrich 10 hours ago ago

      That's a different problem and not really relevant to OpenClaw. Also, your issue is primarily a skills issue (your skills) if you're using one of the latest models on Claude Code or Codex.

      • snowe2010 9 hours ago ago

        You have to be joking. I tried Codex for several hours and it has to be one of the worst models I’ve seen. It was extremely fast at spitting out the worst broken code possible. Claude is fine, but what they said is completely correct. At a certain point, no matter what model you use, llms cannot write good working code. This usually occurs after they’ve written thousands of lines of relatively decent code. Then the project gets large enough that if they touch one thing they break ten others.

        • HumanOstrich 8 hours ago ago

          I beg to differ, and so do a lot of other people. But if you're locked into this mindset, I can't help you.

          Also, Codex isn't a model, so you don't even understand the basics.

          And you spent "several hours" on it? I wish I could pick up useful skills by flailing around for a few hours. You'll need to put more effort into learning how to use CLI agents effectively.

          Start with understanding what Codex is, what models it has available, and which one is the most recent and most capable for your usage.

      • nurettin 9 hours ago ago

        Well, I will not be berated by an ostrich!

  • whateveracct 10 hours ago ago

    okay dumbo

  • moomoo11 10 hours ago ago

    Press [X] to doubt

    Press [Space] to skip

  • jootsing 10 hours ago ago

    this feels like the only thing you've probably done with open claw

  • nycdatasci 3 hours ago ago

    Since many posts mention lack of substance, providing a link to the All-In Podcast from last week in which they discuss Clawdbot (prior to re-brand). https://www.youtube.com/watch?v=gXY1kx7zlkk&t=2754s

    For the impatient, here's a transcript summary (from Gemini):

      The speaker describes creating a "virtual employee" (dubbed a "replicant") running on a local server with unrestricted, authenticated access to a real productivity stack—including Gmail, Notion, Slack, and WhatsApp. Tasked with podcast production, the agent autonomously researched guests, "vibe coded" its own custom CRM to manage data, sent email invitations, and maintained a work log on a shared calendar. The experiment highlights the agent's ability to build its own internal tools to solve problems and interact with humans via email and LinkedIn without being detected as AI.
    
    He ultimately concludes that for some roles, OpenClaw can do 90%+ of the work autonomously. Jason controversially mentions buying Macs to run Kimi 2.5 locally so they can save on costs. Others argue that hosting an open model on inference optimized hardware in the cloud is a better option, but doing so requires sharing potentially sensitive data.
    • tantalor 3 hours ago ago

      There is a reason I stopped listening to All-In Podcast.