I agree that Brooks's Law applies here, but I think it bites at a different level than suggested.
An engineer coordinating AI agents can achieve coherent architecture. The bottleneck is less human-AI coordination than the inertia of organizational structures that won't adapt.
The engineer now has to coordinate with the AI agents and with all the legacy coordination roles designed for a different era. All of these roles still demand their slice of attention, except now there are more coordination points, not fewer: the AI agents themselves, new AI governance roles, AI policy committees, compliance officers, security assessments...
Brooks’s law was about humans, but the spirit still applies: the real bottleneck is how much of the system a human can keep in their head at once. Most agent setups I’ve seen don’t reduce that load; they increase it: more surface area, more glue code, more boilerplate that nobody fully understands.
The only places they’ve worked well for me are tasks where you can afford to throw away 80% of the output (migrations, test stubs, scaffolding) and keep a tight, human-owned core. Treat agents like interns you fire every night, not teammates you trust with the architecture.
This exactly matches my experience. I also suspected it was just my higher threshold for code quality, but AI-generated code is simply not worth adding to a project without very strict reviews, unless it's non-production and I want to hand the project over to AI entirely.
You can keep burning tokens until it complies. That's what I do, and I get good results. I do often have to spend a day just thinking through the prompt, but then again, coding was rarely the bottleneck. AI is also very good at doing refactors, tedious stuff like constructor juggling. The thing is, code is to be written for humans first, no matter whether the author is human or AI.
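By "constructor juggling" I mean the kind of mechanical rewiring sketched below. It's a hypothetical TypeScript before/after (the names like ReportService and Mailer are made up for illustration), showing the sort of tedious signature shuffling that's easy to hand to an agent:

    // Minimal placeholder types so the sketch stands on its own.
    interface Database { query(sql: string): unknown }
    interface Mailer { send(to: string, body: string): void }
    interface Logger { info(msg: string): void }
    interface Metrics { increment(name: string): void }

    // Before: every new dependency means editing this signature and
    // every call site by hand.
    class ReportServiceBefore {
      constructor(
        private db: Database,
        private mailer: Mailer,
        private logger: Logger,   // added last sprint
        private metrics: Metrics, // added this sprint
      ) {}
    }

    // After: dependencies gathered into one object, so the next addition
    // no longer touches every constructor call in the codebase.
    interface ReportServiceDeps {
      db: Database;
      mailer: Mailer;
      logger: Logger;
      metrics: Metrics;
    }

    class ReportServiceAfter {
      constructor(private deps: ReportServiceDeps) {}
    }

Threading that change through dozens of call sites is exactly the kind of boring, well-specified edit where the model complies eventually and a human review is cheap.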