This is an idea that my dad and I, who have set up dueling OpenClaw agents, have been tossing around a lot lately. It's been so much fun messing around with OpenClaw, but at the end of the day I'm writing this post because I don't have anything better to do with all of that excitement.
> These are not hypothetical questions anymore. They're engineering problems.
> Not the human kind. The infrastructure kind
> Not better models. Not smarter agents. Sandboxed accounts. Scoped permissions.
To be frank, the idea wasn't bad (unlike pure AI slop). It gave me the feeling that somebody wrote a summary of points, some of them with real poignancy, and let the AI turn it into a post - but that brings the slopification with it.
Thanks! And yeah, that's pretty much what I did. I liked the research it added (some of which I wasn't aware of) and the analogies, in case ideas that were intuitive to others weren't intuitive to me. But I'm still trying to fine-tune the editing workflow to de-slopify its output completely.
:sweatingemoji:
I'm still working on it. I'll yeet out those sentences.
But I'm more interested in conveying the idea...