2 comments

  • alexandroskyr 9 hours ago

    Agent interoperability protocols are starting to emerge (e.g. A2A / similar efforts), but I’m still unsure what the trust/identity layer should look like when agents need to contact other agents and sometimes escalate to a human. I’m building a proof-of-concept (CLI-first, MCP-compatible) and want to stress-test the design before locking the architecture.

    Premise (for this prototype):

    - Agents do the transactional work (scheduling, purchasing, monitoring)

    - Humans are only pinged for decisions or when an agent is stuck

    - I’m modeling only agent↔agent and agent→human flows (no human-to-human UI)

    Examples:

    - I ask my agent to reschedule lunch with George → it negotiates with George’s agent → we each get a decision card: “Thu 2pm. Accept?”

    - A supermarket agent publishes a discount feed → my agent filters → “Olive oil 30% off. Buy?” → if yes, it executes

    - If an agent can’t complete a step online, it escalates with a structured decision card (what/why/options/cost-risk/deadline/default)

    The discovery + trust problem:

    This only works if identity + spam are handled well. My current leaning:

    - Business agents: public, verified (some form of validation)

    - Personal agents: private/whitelist by default (contacts-only)

    - Decision cards are structured + auditable (action, options, cost/risk, deadline, safe default)
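    A minimal sketch of that whitelist-by-default policy (the agent IDs and the registry shape here are invented for illustration; how the registry gets populated is exactly the open verification question below):

```python
# Hypothetical registry of business agents that passed some validation step.
VERIFIED_BUSINESS_AGENTS = {"agent://supermarket.example"}

def accept_contact(sender: str, my_contacts: set[str]) -> bool:
    """Whitelist-by-default: admit verified businesses or known personal contacts;
    everyone else is dropped (or queued for explicit human approval)."""
    return sender in VERIFIED_BUSINESS_AGENTS or sender in my_contacts

contacts = {"agent://george.example"}
```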

    But I’m unsure about the verification layer:

    - Full KYC improves accountability but adds friction and centralization.

    - A keys/web-of-trust approach is more open, but how do you prevent unsolicited outreach from becoming spam?

    Questions:

    1) Does “human approves decisions, agent executes transactions” match how you expect agentic workflows to evolve?

    2) What trust/identity model would you use (KYC tiers, web-of-trust, stake/bond, proof-of-work, reputation, something else)?

    3) What breaks first?

    https://platia.ai (named after πλατεία — village square)

    • antonios 31 minutes ago

      Interesting, will take a look. Regarding your questions:

      - Historically, reputation and web-of-trust models have been tried with mixed results (see PGP/GPG history)

      - Proof of work for human validation can probably be gamed; it's still useful as a rate-limiting/DDoS-mitigation measure, though (check how Tor uses it)

      - I'd be very skeptical about providing my full KYC details to a new service; perhaps host verification à la Let's Encrypt could serve as a Layer 1 "KYC" tier?
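      The proof-of-work point above is essentially hashcash: the sender burns CPU per message, so bulk spam gets expensive while a single legitimate contact barely notices. A minimal sketch (parameters illustrative; Tor's actual onion-service PoW scheme is different, this just shows the cost asymmetry):

```python
import hashlib
from itertools import count

def solve(message: str, difficulty_bits: int = 12) -> int:
    """Find a nonce so that SHA-256(message:nonce) has difficulty_bits leading zero bits.
    Cost grows ~2**difficulty_bits for the sender."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(message: str, nonce: int, difficulty_bits: int = 12) -> bool:
    """Verification is a single hash, so the receiver's check is cheap."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

      As a spam deterrent alone this is weak (botnets have free CPU), which matches the "can be gamed" caveat, but as a per-message rate limiter it composes well with the whitelist layer.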