I’ve been experimenting quite a bit with AI-assisted development recently (Copilot, Cursor, Claude, etc.), both in larger systems and in smaller side projects.
What keeps surprising me is not hallucinations or model output quality as such, but how easy it is to lose shared architectural context over time.
At first everything feels great. Things move fast. Demos work. Features pile up.
But after a while I notice that parts of the system were built differently than I would have designed them myself: not necessarily wrong, just inconsistent in ways that are hard to see early.
Nothing breaks right away.
But at some point I realize I wouldn’t feel comfortable taking full responsibility for the system anymore, especially under (long-term) production constraints.
What I’m struggling with is this:
How do you keep architectural intent explicit when AI is writing a lot of the code?
For me it seems less like a prompt or a context window problem and more like a governance issue: Which decisions must stay stable? What must not change? Where is AI allowed to explore, and where should it be constrained?
I’ve started experimenting with more explicit roles, workflows, and step-by-step development phases, but honestly I’m not convinced yet this is the right balance.
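For a sense of what that currently looks like: a short plain-text note at the repo root that the assistant gets pointed at in every session. All of the concrete rules below are invented purely to illustrate the shape, not taken from a real project:

  ARCHITECTURE NOTES (read before generating code)

  Stable decisions (do not change without asking):
  - All persistence goes through the repository layer; no direct DB access from handlers.
  - Public API contracts are versioned; breaking changes need a migration note first.

  Open for exploration:
  - Internal module layout, naming, test structure.

  Workflow per feature:
  1. Propose a short design note (components touched, new dependencies).
  2. Wait for sign-off.
  3. Implement, then list any deviations from the design note.

Deciding what belongs on the "stable" list versus the "open" list is exactly the part I don’t feel I’ve gotten right yet.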
I’m curious how others handle this in practice:
How do you get real speed from AI tools without slowly drifting into a system you no longer fully understand or trust?
> …How do you keep architectural intent explicit…
> …
> …without slowly drifting into a system you no longer fully understand…?
One way some of the more mature shops I've worked at have solved the understandability problem (for human devs) is with visual modeling.
Visual modeling doesn't necessarily mean by-the-book UML, of course. Even junior devs understand simple misshapen boxes, lines, and arrows on a whiteboard.
You'd be amazed at how effective even imperfectly drawn visual models are at keeping meatbags' memories on track with how the system hangs together.
In a brick-and-mortar dev shop, ideally you want a large whiteboard positioned where every team member can conveniently glance at it at any time.
I know it works wonders for me, anyway. In fact, just yesterday I added a PlantUML-backed visual-model-generating skill to the coding CLI I've been using lately. The generated diagrams land in a uml folder at the root of my projects.
> …I’m curious how others handle this in practice…
I'm banking on good old-fashioned UML, defined as machine-readable text files, as one way to remind coding agents of architectural intent in my AI-assisted coding projects.
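To give a sense of what I mean, here's roughly what one of those generated files looks like; the component names and the note are placeholders, not from a real project:

  @startuml
  ' Illustrative component view; all names are placeholders
  package "API" {
    [OrderController]
  }
  package "Domain" {
    [OrderService]
    [PricingPolicy]
  }
  package "Infrastructure" {
    [OrderRepository]
  }

  [OrderController] --> [OrderService]
  [OrderService] --> [PricingPolicy]
  [OrderService] --> [OrderRepository]

  note right of [OrderService]
    Intended rule: domain code only reaches
    infrastructure through OrderRepository.
  end note
  @enduml

Because it's plain text, the same file can be dropped straight back into the agent's context as a reminder of which dependencies are intended and which aren't.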
It's exactly the same problem as with humans. The only difference is you can control (willing) people more easily.
I agree. The difference for me is speed and feedback loops, though: with AI, architectural drift can accumulate much faster and stay invisible longer. That’s what makes governance feel harder here.