The Last Year of Localhost

(ona.com)

6 points | by sidk24 6 hours ago

4 comments

  • mootoday 3 hours ago

    You don't need a vendor-locked solution for AI agents to work in their own isolated environments :-).

    - https://sprites.dev
    - https://mise.jdx.dev
    - https://fnox.jdx.dev

    Develop a small API that accepts a prompt, starts a sprite sandbox, and then waits for the AI agent to open a pull request (rough sketch below).

    I developed that in less than a week; now my client owns the code, can customize it to their needs, and can switch to a different compute environment with minimal code changes.
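
    A rough sketch of that flow in Python, assuming FastAPI and a hypothetical start_sandbox helper standing in for the real sprites.dev client (the endpoint names and payload shape are illustrative, not any particular provider's API):

        # Sketch only: start_sandbox stands in for whatever isolated compute
        # you use (a sprite, a container, a VM). Swapping providers means
        # swapping this one function.
        import uuid

        from fastapi import FastAPI
        from pydantic import BaseModel

        app = FastAPI()
        TASKS: dict[str, dict] = {}  # in-memory task store, fine for a sketch


        class Task(BaseModel):
            prompt: str
            repo: str  # repository the agent should open a pull request against


        def start_sandbox(task_id: str, task: Task) -> None:
            """Hypothetical: boot an isolated sandbox, hand it the prompt,
            and let the agent inside push a branch and open a PR."""
            TASKS[task_id]["status"] = "running"
            # provider-specific launch call goes here


        @app.post("/tasks")
        def create_task(task: Task) -> dict:
            task_id = uuid.uuid4().hex
            TASKS[task_id] = {"status": "queued", "repo": task.repo}
            start_sandbox(task_id, task)
            return {"id": task_id}


        @app.get("/tasks/{task_id}")
        def get_task(task_id: str) -> dict:
            # Poll this until the agent's pull request shows up.
            return TASKS.get(task_id, {"status": "unknown"})

    Everything stateful lives inside the sandbox, not on anyone's laptop; the API itself stays a thin shim.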

  • sidk24 6 hours ago

    I work at Ona (formerly Gitpod). Background agents from Stripe, Ramp, and Shopify are shipping real code in production, but they all depend on one thing: a fully automated, reproducible dev environment. No localhost, no local state, no "works on my machine."

    The companies moving fastest right now have all done the boring work first: standardizing their dev environments. The agent harness is a thin layer on top.

    Let us know what you think!

  • dascrazy_96 5 hours ago

    Unpopular opinion: the mass resistance to cloud dev environments over the past 5 years was never really about latency or performance. It was identity. Developers treat their local setup the way car guys treat their garage builds. It's theirs, they tuned it, and suggesting a managed alternative feels like an insult to their craft.

    But the article kind of buries the real lede here. The interesting claim isn't "localhost is dead." It's that environment standardization is the actual bottleneck for agent adoption, and most companies don't even realize it yet. Everyone's blaming model capability when the problem is they literally cannot give an agent a working dev environment programmatically.

    I watched three teams at my company try to run background agents this quarter. All three hit the exact same wall. The agent generates code fine, but it can't run the test suite because nobody ever bothered to make the environment reproducible. One team spent two weeks just getting a second copy of the dev environment running on the same machine. The agent part took a day.

    Kind of ironic that the "works on my machine" problem everyone joked about for 20 years is now the thing blocking the biggest productivity shift since version control. Except now it's "works on my machine, but I can't describe my machine to a computer."

    Genuinely curious how many people here have actually tried running 3+ agents in parallel on a monorepo locally. The git worktree section rang painfully true for me. Port conflicts, shared caches stomping on each other, laptop sounds like it's trying to achieve liftoff. At what point do we just admit that "buy more RAM" isn't an architecture?

    • sidk24 5 hours ago

      Real talk! This isn’t about latency; it’s about authorship.

      “My machine” is identity and control. That worked when the human was the execution engine. With agents, undocumented setup turns from annoyance into hard failure.

      Agent workflows expose environment debt the way CI exposed testing debt.

      If an environment can’t be provisioned programmatically, it’s not infrastructure, it’s folklore.