1 comment

  • walmsles 10 hours ago

    I built a library that uses an LLM as an orchestrator to coordinate multiple agents at runtime. You define what each agent does in markdown files using RFC 2119 constraints (MUST, SHOULD, MAY), and the orchestrator figures out who to call and when based on the user's request.
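    A hypothetical agent definition might look like this (an illustration of the RFC 2119 style, not the library's actual SOP schema):

    ```markdown
    # Refund Agent

    The agent MUST verify the order ID before issuing any refund.
    The agent SHOULD escalate refunds over $500 to a human reviewer.
    The agent MAY offer store credit as an alternative.
    ```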

    This builds on AWS Strands Agent SOPs (markdown format for agent workflows released in November). The difference: instead of manually chaining agents or defining explicit flows, the orchestrator reads available agent capabilities and decides the execution path dynamically.

    Add a new agent by dropping in a markdown file. No code changes to coordination logic.
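    A rough sketch of the "drop in a file" idea (hypothetical names and parsing rules, not the actual @serverless-dna/sop-agents API): scan a directory for markdown agent files, treat the H1 as the agent name and any RFC 2119 line as a constraint, and assemble the capability list the orchestrating LLM would choose from.

    ```typescript
    import * as fs from "node:fs";
    import * as os from "node:os";
    import * as path from "node:path";

    // Two example agent SOP files written to a temp directory so the
    // sketch is self-contained; the real file format may differ.
    const dir = fs.mkdtempSync(path.join(os.tmpdir(), "agents-"));
    fs.writeFileSync(
      path.join(dir, "billing.md"),
      "# Billing Agent\nThe agent MUST handle invoice and refund questions.\n",
    );
    fs.writeFileSync(
      path.join(dir, "support.md"),
      "# Support Agent\nThe agent SHOULD triage technical issues first.\n",
    );

    interface AgentSpec {
      name: string;
      constraints: string[];
    }

    // Parse each markdown file: the H1 is the agent name, and any line
    // containing an RFC 2119 keyword is treated as a constraint.
    function loadAgents(directory: string): AgentSpec[] {
      return fs
        .readdirSync(directory)
        .filter((f) => f.endsWith(".md"))
        .map((f) => {
          const lines = fs
            .readFileSync(path.join(directory, f), "utf8")
            .split("\n");
          const name = (lines.find((l) => l.startsWith("# ")) ?? "# unnamed").slice(2);
          const constraints = lines.filter((l) => /\b(MUST|SHOULD|MAY)\b/.test(l));
          return { name, constraints };
        });
    }

    // The orchestrator prompt is just the concatenated capability list;
    // the LLM call that picks an execution path is not shown here.
    const agents = loadAgents(dir);
    const orchestratorPrompt =
      "You coordinate these agents:\n" +
      agents.map((a) => `- ${a.name}: ${a.constraints.join(" ")}`).join("\n");

    console.log(orchestratorPrompt);
    ```

    Adding an agent then means writing one more `.md` file into the directory; the capability list, and therefore the orchestrator's options, update on the next load with no changes to coordination code.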

    The bet: LLMs are better at runtime orchestration than developers are at predicting workflows upfront, especially when requirements change. Natural language is more maintainable when both producers (agent authors) and consumers (orchestrator) are LLMs.

    Built on AWS Strands SDK and Bedrock with Claude models. Using this in a technical bootcamp next week to teach students complex agent workflows without coordination code.

    GitHub: https://github.com/serverless-dna/sop-agents
    npm: https://www.npmjs.com/package/@serverless-dna/sop-agents
    AWS Strands blog: https://aws.amazon.com/blogs/opensource/introducing-strands-...