I built a workflow (packaged as a Claude Code skill) called "showcase-tuning"
for iterating on rendering/visual code with AI assistance.
The core idea: instead of describing UI bugs in text, you create a minimal
"showcase" - an isolated, runnable version of just the visual component
you're tuning. The AI can then iterate on it rapidly without side effects
from the rest of the codebase.
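To make the idea concrete, here is a minimal sketch of what a "showcase" might look like. Everything in it is hypothetical (the `render_gradient` routine, the PPM output path) and not part of the actual skill; the point is just the shape: the visual code under tuning, isolated into a single runnable file whose output can be inspected after every AI iteration.

```python
# Hypothetical minimal showcase: the rendering routine being tuned,
# extracted into one standalone script with no dependencies on the app.

def render_gradient(width: int, height: int) -> list[list[tuple[int, int, int]]]:
    """The visual code under tuning: a simple horizontal gradient."""
    return [
        [(int(255 * x / (width - 1)), 0, 128) for x in range(width)]
        for _ in range(height)
    ]

def write_ppm(path: str, pixels: list[list[tuple[int, int, int]]]) -> None:
    """Dump the frame to a viewable file so each iteration is inspectable."""
    h, w = len(pixels), len(pixels[0])
    with open(path, "w") as f:
        f.write(f"P3\n{w} {h}\n255\n")
        for row in pixels:
            f.write(" ".join(f"{r} {g} {b}" for r, g, b in row) + "\n")

if __name__ == "__main__":
    # One command re-renders the component; the AI edits render_gradient,
    # re-runs this file, and you eyeball showcase.ppm.
    write_ppm("showcase.ppm", render_gradient(64, 32))
```

The same structure applies to an Android Composable behind a preview, or a canvas routine in a standalone HTML page: one file, one entry point, instant visual feedback.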
It's framework-agnostic but works especially well for Android/mobile UI,
canvas rendering, and anything where "you'll know it when you see it."
The repo includes the skill file for Claude Code so you can drop it straight
into your workflow.
Would love feedback on the concept and whether others have tackled visual
debugging with AI differently.