Simple Meta-Harness on Islo.dev

(zozo123.github.io)

28 points | by zozo123-IB 2 hours ago

13 comments

  • mccoyb 35 minutes ago

    It has now become fashionable to dress oneself in the garb of science to sell dev environments ... for agents.

    It has now become fashionable to claim much, and furnish little.

    It has now become fashionable to fail to understand or state the core of your proposal in as few words as possible: instead of "genetic algorithm applied to the space of harnesses, parallelized by our infrastructure" we get "Three swaps. Same orchestrator. Same dashboard. The wiring is the thing."

    We're cooked chat.

    • adamgold7 28 minutes ago

      we need better RL

  • love2read an hour ago

    I have no idea what this does or is. I really wish they could have given a better description of why this is useful.

    • antiobli an hour ago

      Their lines "A meta-harness is the loop that improves the harness automatically" and "the bottleneck is diagnostic context: most optimizers compress prior runs into summary statistics, while meta-harness gives the proposer up to 10M tokens of raw execution traces to grep through," seem good, no?

      Have to dig into the code, but it looks like they have sound engineering around a "self-improving" agentic coding harness. Will be fun to take the code for a spin.
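      Roughly, I'd guess the loop looks something like this toy sketch (all names here are hypothetical, not from the project; the real system hands raw traces to an LLM proposer, while this stand-in just string-matches them):

```python
import random

# Hypothetical sketch of a meta-harness loop: run a harness config on
# tasks, keep raw traces, let a "proposer" inspect them, accept only
# strict improvements. None of these names come from the project.

TASKS = [("racecar", "yes"), ("hello", "no")]  # toy palindrome tasks

def run_harness(config, word):
    """Toy 'harness': answers the palindrome question per its settings."""
    answer = "yes" if word == word[::-1] else "no"
    if config.get("flip_answer"):  # a buggy knob the loop can discover
        answer = "no" if answer == "yes" else "yes"
    trace = f"config={config} word={word} answer={answer}"
    return answer, trace

def score(config):
    traces, correct = [], 0
    for word, expected in TASKS:
        answer, trace = run_harness(config, word)
        traces.append(trace)
        correct += answer == expected
    return correct / len(TASKS), traces

def propose(config, traces):
    """Stand-in for the LLM proposer: it 'greps' the raw traces for a
    suspicious setting instead of only seeing summary statistics."""
    new = dict(config)
    if any("'flip_answer': True" in t for t in traces):
        new["flip_answer"] = False
    else:
        knob = random.choice(["verbose", "retry"])
        new[knob] = not new.get(knob, False)
    return new

def meta_loop(config, steps=10):
    best = config
    best_score, traces = score(config)
    for _ in range(steps):
        candidate = propose(best, traces)
        s, t = score(candidate)
        if s > best_score:  # keep only strict improvements
            best, best_score, traces = candidate, s, t
    return best, best_score

best, acc = meta_loop({"flip_answer": True})
```

      The point of the "10M tokens" claim would be the `propose` step: with raw traces available, the proposer can see *why* a config failed, not just that it scored poorly.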

      • kingstnap 42 minutes ago

        10M tokens of raw execution traces to grep through is slop. The tasks are fizzbuzz, palindrome, list reversal, and sum-even. The palindrome challenge is literally this:

        > Is the word "racecar" a palindrome? Answer with exactly one lowercase word: "yes" or "no". Print only the answer.

    • cyanydeez an hour ago

      I find it fascinating: all these attempts are goldmining LLMs with a harness, and it's clear they're generating all the docs for AI to read and use. Even the docs say "we made an MCP for this!", as if somehow within 2 years people no longer make choices and it's just AIs roaming the internet trying on harnesses; certainly that'd be a fascinating reality, but the verbosity really is an eye-glazing experience. Who do they expect to read all of that ad copy? It's not me.

  • vmg12 30 minutes ago

    This is not how I've seen the term "meta-harness" used. In the common usage I've seen, a meta-harness is a wrapper around an existing agent that gives it a new UI or new abilities.

  • visarga 21 minutes ago

    I did this too, ablating all the components in my coding agent harness. The insight from my meta-optimization loops was "have judge agents review the plan and implementation".

    One of my own insights here is that you need to collect not just execution traces, but all the human-in-the-loop nudges and steering commands. They are one of the purest sources of feedback on coding agents when seen in context.

    I agree with OP on the need to collect traces and compare them, not just scores. It is a much richer source of feedback.

    If anyone is interested I have a slide deck about my approach: https://horiacristescu.github.io/claude-playbook-plugin/docs...
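    A toy sketch of what I mean by keeping nudges in context (illustrative names only, not my actual implementation): tag every trace event by source, then pull out each human nudge with the agent event it was reacting to.

```python
from dataclasses import dataclass, field
from typing import Literal

# Hypothetical sketch: log human steering commands alongside agent
# events so each nudge can later be read with its surrounding context.

@dataclass
class Event:
    source: Literal["agent", "human"]
    text: str

@dataclass
class Trace:
    events: list[Event] = field(default_factory=list)

    def log_agent(self, text: str) -> None:
        self.events.append(Event("agent", text))

    def log_human(self, text: str) -> None:
        # Human nudges are the feedback signal worth keeping.
        self.events.append(Event("human", text))

    def human_feedback(self):
        """Each human nudge paired with the agent event it responds to."""
        out = []
        for i, e in enumerate(self.events):
            if e.source == "human":
                context = self.events[max(0, i - 1):i]
                out.append((context, e))
        return out

t = Trace()
t.log_agent("plan: rewrite parser in one pass")
t.log_human("no, keep the two-pass design")
t.log_agent("ok, keeping two passes")
pairs = t.human_feedback()
```

    Seen like this, a nudge plus its context is a labeled example of "what the agent was about to do wrong", which is exactly what a score alone throws away.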

  • m3kw9 an hour ago

    This seems to be another over-optimization for AI that many are trying to get into. The LLMs improve, your setup is deprecated, and you've wasted time optimizing for a slight edge. TL;DR: you trade time for a slight edge.

  • cyanydeez an hour ago

    serious question: I've already got an opencode harness running on a local model. It's easily installable via the insecure bash command. It's already tailored with a couple of plugins, and with a proper TODO.md and planning I can get it to loop fine, with proper attention to its pitfalls on vague/non-deterministic language. It's all running on an AMD 395+ with a Qwen3-Coder-Next model and ~256k context. opencode has a webui I can put behind a password-protected endpoint and keep busy from anywhere I need via a simple nginx proxy.

    How does this go above and beyond this straightforward open-source, open-weights, and relatively cheap setup? Do you just get more tokens from SOTA models? Can anyone rationally say the products of token production are high-quality and secure?

    • pohl an hour ago

      You know how OpenCode can be prompted to modify itself when you want to improve it in some way? This just automates that kind of thing.

      • cyanydeez 31 minutes ago

        It can't actually; I had to create a systemd service that watches the config path and sends a signal to reload the files. It roughly works, but it doesn't actually do the loop correctly.

        However, the problem with self-modification is the tendency towards inoperable states. Does it automatically revert when a detrimental state is reached? How does it determine that a modification worked?
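        The obvious shape of an answer to my own question, sketched (hypothetical names, not opencode's API): snapshot the config, apply the modification, re-run a smoke benchmark, and revert on regression.

```python
import copy

# Hypothetical evaluate-then-revert wrapper for self-modification:
# a modification "worked" iff the benchmark score doesn't regress.

def safe_modify(config, modify, evaluate, min_delta=0.0):
    """Apply modify() to a copy; keep it only if evaluate() improves."""
    baseline = evaluate(config)
    candidate = modify(copy.deepcopy(config))
    new_score = evaluate(candidate)
    if new_score >= baseline + min_delta:
        return candidate, new_score  # modification accepted
    return config, baseline          # automatic revert

# Toy demo: the benchmark rewards a temperature near 0.2.
def evaluate(cfg):
    return 1.0 - abs(cfg["temperature"] - 0.2)

good = lambda c: {**c, "temperature": 0.2}
bad = lambda c: {**c, "temperature": 1.5}

cfg = {"temperature": 0.7}
cfg, s1 = safe_modify(cfg, good, evaluate)  # accepted
cfg, s2 = safe_modify(cfg, bad, evaluate)   # reverted
```

        Of course this only pushes the problem into `evaluate`: a smoke benchmark can miss exactly the inoperable states I'm worried about.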

        • pohl 25 minutes ago

          The paper shows that it can. Note that this seems to be someone’s experiment; if it’s not working for you, that’s probably because it’s not a polished product.