12 comments

  • codelion 6 days ago

    Thanks for checking this out! A few additional details that didn't fit in the main post:

    The system maintains two separate limits: a storage limit (max 10 strategies per problem type in the database) and an inference limit (max 3 strategies applied per query). This keeps the database manageable while ensuring the system prompt doesn't get too long.
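    A minimal sketch of how those two caps might interact (the names and data shapes here are illustrative, not the actual optillm implementation):

```python
# Illustrative sketch of the storage/inference limits described above.
MAX_STORED = 10   # storage limit: strategies kept per problem type
MAX_APPLIED = 3   # inference limit: strategies injected into one system prompt

def store_strategy(db, problem_type, strategy):
    """Add a strategy, evicting the weakest once the storage cap is hit."""
    bucket = db.setdefault(problem_type, [])
    bucket.append(strategy)
    bucket.sort(key=lambda s: s["success_rate"], reverse=True)
    del bucket[MAX_STORED:]  # keep only the top 10

def strategies_for_query(db, problem_type):
    """Pick at most 3 stored strategies to apply to a query."""
    return db.get(problem_type, [])[:MAX_APPLIED]
```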

    One interesting finding was that strategies only get used for inference once they have at least 5 attempts and a 40% success rate. This prevents the system from applying unproven strategies to new problems.
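    That eligibility filter is easy to state in code (a sketch with assumed field names, not the actual implementation):

```python
MIN_ATTEMPTS = 5        # a strategy must have been tried at least 5 times...
MIN_SUCCESS_RATE = 0.4  # ...and succeeded on at least 40% of those tries

def is_proven(strategy):
    """Return True once a strategy has earned the right to be used at inference."""
    attempts = strategy["attempts"]
    if attempts < MIN_ATTEMPTS:
        return False  # unproven: not enough data yet
    return strategy["successes"] / attempts >= MIN_SUCCESS_RATE
```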

    The approach works particularly well with reasoning models like DeepSeek-R1 and QwQ - the learned strategies seem to guide their thinking process effectively.

    I'm especially curious about:

    1. How this might work with different model families

    2. Whether the community sees value in sharing strategy databases between users

    3. Ideas for extending beyond text-based reasoning to multimodal problems

    The plugin integrates with our broader optillm project which has other inference optimization techniques. You can combine SPL with methods like mixture-of-agents or MCTS using the "&" operator.
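    For a sense of what combining techniques with "&" could look like, here is a hypothetical parser for a combined model field; the actual optillm syntax and routing may differ in details:

```python
# Hypothetical parser for a combined-approach model field such as
# "spl&moa-gpt-4o-mini" (illustrative only; not the real optillm router).
def parse_model_field(field):
    """Split a model field into (list of techniques, base model name)."""
    spec, _, base_model = field.partition("-")
    return spec.split("&"), base_model
```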

    Next I'm thinking about meta-learning - having the system learn how to create better strategies more efficiently. Also exploring collaborative strategy sharing.

    Would love to hear thoughts on the approach or if anyone has ideas for other problem domains where this might be useful!

  • yunusabd 6 days ago

    That's an interesting space to explore! I'm wondering about the baseline in the benchmarks. Which prompts did you use for those? I'm asking because some of the resulting prompts seem fairly generic, and I'm wondering if you could just blanket add them to each prompt and also see an improvement. Things like "Identify the question (what are you trying to find?)".

    In the same vein, wouldn't it be interesting to measure which part of the prompt most contributed to better solving the problem? Surely some parts will be just noise and can be trimmed away.

    Also wondering what this does, since the model probably won't (can't?) actually read the problem multiple times:

      > Read the problem carefully (multiple times).
    • codelion 6 days ago

      Re-reading the problem apparently works well - https://arxiv.org/abs/2309.06275

      Here the system seems to have discovered this strategy on its own. The prompts are generic because the learning loop includes a step that refines and combines them. I haven't experimented yet with adding all prompts to every query; given the large context that would produce, it will be interesting to see.

      • yunusabd 6 days ago

        Okay, but it looks like in the paper, they are actually adding the question twice in the prompt, not just instructing the model to read it twice. Or am I missing something?

  • tanchaowen84 6 days ago

    This is a really cool idea! I recently came across another project on GitHub that explores a similar direction: https://github.com/tensorzero/tensorzero. You might find it interesting, and it could offer some useful inspiration for your work.

  • dedicate 6 days ago

    If I jump in and, say, manually 'tweak' one of those JSON strategies because I think I have a better idea, what happens next? Does the LLM just roll with my brilliant human intervention, or could it eventually 'learn' that my tweak was actually counterproductive and refine it back (or away from my edit)?

    • codelion 6 days ago

      You can run in two modes; by default you run in inference mode, without learning, so any changes you make will be used as-is. If you switch to learning mode, the strategies are updated/refined and merged based on a config that you can control:

      # How often to perform maintenance operations (merge, prune)
      MAINTENANCE_INTERVAL = 40

      # Strategy selection thresholds
      STRATEGY_CREATION_THRESHOLD = 0.7    # Higher threshold to avoid creating similar strategies
      STRATEGY_MERGING_THRESHOLD = 0.6     # Lower threshold to merge more similar strategies
      MIN_SUCCESS_RATE_FOR_INFERENCE = 0.4 # Minimum success rate for a strategy to be used during inference

      The configs are all defined here - https://github.com/codelion/optillm/blob/main/optillm/plugin...
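      To make the merging threshold concrete, here is a rough sketch of how a maintenance pass might fold near-duplicate strategies together (illustrative only: `similarity` is an assumed text-similarity function, and the real optillm logic differs):

```python
STRATEGY_MERGING_THRESHOLD = 0.6

def maintenance_pass(strategies, similarity):
    """Merge strategies whose pairwise similarity exceeds the threshold."""
    kept = []
    for s in strategies:
        for k in kept:
            if similarity(k, s) >= STRATEGY_MERGING_THRESHOLD:
                # fold usage statistics into the surviving strategy
                k["attempts"] += s["attempts"]
                k["successes"] += s["successes"]
                break
        else:
            kept.append(s)
    return kept
```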

  • imaltont 6 days ago

    You should take a look at something called case-based reasoning. It fits the road you're currently walking perfectly; you've basically rediscovered the CBR cycle.

  • Falimonda 6 days ago

    How do you foresee a system like this efficiently managing and relying on a set of strategies whose size can become unbounded?

  • ramonga 6 days ago

    I would like to see some interesting input/output pairs. Do you have any?

    • codelion 6 days ago

      We have some examples in the plugin README: https://github.com/codelion/optillm/tree/main/optillm/plugin...

      E.g., this is the strategy optillm discovered for solving word problems:

      *Refined Strategy for Solving Word Problems:*

      1. *Understand:*
         * Read the problem carefully (multiple times).
         * Identify the question (what are you trying to find?).
         * List all given information (facts, numbers, units).
         * Clarify ambiguous terms/units.

      2. *Organize Information & Identify Unknowns:*
         * Choose an organization method (e.g., table, diagram, list, drawing).
         * Clearly identify the unknowns (what you need to solve for).

      3. *Plan and Translate:*
         * Define all variables with units (e.g., `p = number of pennies`, `c = number of compartments`).
         * Identify relationships between knowns and unknowns.
         * Convert units if necessary.
         * Write equations or expressions, including units, that relate the knowns and unknowns.
         * Ensure units are consistent throughout the equations.
         * Outline the solution steps.

      4. *Solve:*
         * Show work step-by-step.
         * Track units throughout calculations.
         * Calculate accurately.
         * Solve for the unknowns.

      5. *Evaluate and Verify:*
         * Check if the answer is reasonable.
         * Verify the answer.

      6. *Summarize:*
         * State the answer with units.

      The full list of discovered strategies is available here - https://github.com/codelion/optillm/blob/main/optillm/plugin...