"We built it, it worked sometimes, so we threw it all out" is either the most honest engineering blog post I've read in a while or the plot summary of every project I've ever worked on.
It feels like the teams getting AI workflows right are the ones willing to iterate toward simplicity, figuring out what they're uniquely good at and letting the rest of the ecosystem handle the rest.
That loop of build, learn, simplify is quietly producing better products than "build the whole thing" ever did.
AI tooling is moving so fast that the only way to stay afloat is to go as generic and pluggable as possible. That's the case for coding agents whether they're writing code or fixing tests.
"We built it, it worked sometimes, so we threw it all out" is either the most honest engineering blog post I've read in a while or the plot summary of every project I've ever worked on.
It feels like the teams getting AI workflows right are the ones willing to iterate toward simplicity: figure out what you're uniquely good at and let the broader ecosystem handle everything else.
That loop of build, learn, simplify is quietly producing better products than "build the whole thing" ever did.
Tangential, but it reminds me of the backlash Clair Obscur got after they used AI assets early in development. https://www.polygon.com/clair-obscur-expedition-33-indie-gam...
Feels like a no-brainer at this point to "build fast with AI, prove its use, then tear parts of it down and make it good."
AI tooling is moving so fast that the only way to stay afloat is to go as generic and pluggable as possible. That's the case for coding agents, whether they're writing code or fixing tests.
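Loosely sketched, "generic and pluggable" can mean an agent core that depends only on a narrow tool interface, so the model, the editor, or the test runner can be swapped without touching the loop. Here's a minimal Python sketch of that shape; the names (Tool, ShellTool, run_agent, the "TOOL:" reply convention) are all made up for illustration, not any real framework's API:

    # Illustrative sketch only: Tool, ShellTool, and run_agent are
    # hypothetical names, not a real agent framework's API.
    import subprocess
    from typing import Callable, Protocol


    class Tool(Protocol):
        name: str

        def run(self, args: str) -> str: ...


    class ShellTool:
        name = "shell"

        def run(self, args: str) -> str:
            # Run a shell command and hand both streams back to the model.
            result = subprocess.run(args, shell=True, capture_output=True, text=True)
            return result.stdout + result.stderr


    def run_agent(prompt: str, tools: dict[str, Tool], llm: Callable[[str], str]) -> str:
        # Generic loop: the core never knows which model or tools it's driving.
        transcript = prompt
        while True:
            reply = llm(transcript)
            if reply.startswith("TOOL:"):  # assumed convention: "TOOL:<name> <args>"
                name, _, args = reply.removeprefix("TOOL:").partition(" ")
                transcript += "\n" + tools[name].run(args)
            else:
                return reply

Swapping in a different model or test runner is then just passing a different llm callable or Tool entry, which is why keeping the core this thin makes the churn survivable.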