LLMs are like lane assist for cars. They help with common scenarios, but you still need to watch for edge cases and are ultimately responsible for the vehicle.
The difference is companies expect massive productivity boosts from AI tooling. It's as if when lane assist was invented we also doubled all the speed limits, and now everyone is stressed trying to make sure the car always stays in its lane.
The BCG framing makes it sound like a cognitive load problem, but I think it's more like unreliability fatigue. When your AI does 8 things right and then confidently does the 9th wrong, you spend mental energy second-guessing everything. Supervising an unreliable system is more exhausting than just doing the task yourself.
Yep, the bottleneck is now review and QA.
I think you missed the point of the article.
I worked at a place that did pair programming. It was more intense than programming individually, and doing it for hours could be draining.
AI programming is going to be worse, because the AI comes back faster than a human would with the same work.