1 comment

  • animesh93 11 hours ago

    Author here. I kept running into the same problem while working on big projects: threat models drift away from the codebase as soon as architecture changes, so we started experimenting with keeping security intent directly in the code.

    GuardLink parses structured annotations from comments (@asset, @threat, @mitigates, @exposes) and continuously builds a threat model from them: dashboards, reports, and SARIF output. A diff engine then checks how the security posture changes between commits.
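    To make the idea concrete, here is a minimal sketch of extracting those annotations from source comments. The "@tag value" syntax and the shape of the output are my assumptions for illustration; GuardLink's actual grammar may differ.

    ```javascript
    // Hypothetical extractor: pulls "@asset foo", "@threat bar", etc.
    // out of source text. Not GuardLink's real parser.
    const TAGS = ["asset", "threat", "mitigates", "exposes"];

    function extractAnnotations(source) {
      const annotations = [];
      // Match any known tag followed by the rest of the comment line.
      const re = new RegExp(`@(${TAGS.join("|")})\\s+([^\\n*]+)`, "g");
      for (const match of source.matchAll(re)) {
        annotations.push({ tag: match[1], value: match[2].trim() });
      }
      return annotations;
    }

    const sample = `
    // @asset user-session-token
    // @threat session fixation via unrotated token
    function login(req, res) {
      // @mitigates session fixation: token is rotated on login
      req.session.regenerate(() => res.send("ok"));
    }
    `;

    console.log(extractAnnotations(sample));
    // [ { tag: "asset", value: "user-session-token" }, ... ]
    ```

    The point is that the annotations live next to the code they describe, so the same commit that changes the architecture also changes the threat model.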

    The CI step is intentionally simple: removing a mitigation or escalating an exposure can fail the build, but documenting a new exposure is treated as a warning rather than a blocker. The goal is to make threat modeling evolve with the code instead of being a separate process.
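    The CI decision above can be sketched roughly like this. The data shapes, severity model, and function names are assumptions for illustration, not GuardLink's actual implementation.

    ```javascript
    // Hypothetical CI verdict: compare annotation sets of two commits.
    const SEVERITY = { low: 0, medium: 1, high: 2 };

    // base/head: { mitigations: Set<string>, exposures: Map<name, severity> }
    function ciVerdict(base, head) {
      const errors = [];
      const warnings = [];

      // A mitigation that existed on base but is gone on head fails the build.
      for (const m of base.mitigations) {
        if (!head.mitigations.has(m)) errors.push(`mitigation removed: ${m}`);
      }
      for (const [name, sev] of head.exposures) {
        const prev = base.exposures.get(name);
        if (prev === undefined) {
          // Documenting a new exposure is a warning, not a blocker.
          warnings.push(`new exposure documented: ${name}`);
        } else if (SEVERITY[sev] > SEVERITY[prev]) {
          errors.push(`exposure escalated: ${name} (${prev} -> ${sev})`);
        }
      }
      return { pass: errors.length === 0, errors, warnings };
    }

    const base = {
      mitigations: new Set(["rate-limit-login"]),
      exposures: new Map([["admin-api", "low"]]),
    };
    const head = {
      mitigations: new Set(),
      exposures: new Map([["admin-api", "high"], ["debug-endpoint", "low"]]),
    };
    console.log(ciVerdict(base, head));
    // fails: mitigation removed + exposure escalated; the new exposure only warns
    ```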

    AI coding agents can generate annotations alongside the implementation, and GuardLink validates them. Because the threat model never leaves the repo, it stays current.

    In one internal test on a deliberately vulnerable Node.js app, three different agents produced 143 annotations covering ~73% of the known issues, in about 6 minutes and ~$0.50 of API cost.

    Spec is CC-BY-4.0, CLI is MIT. Happy to answer questions.