Maybe you mentioned it in your demo and I missed it, but how does this differ from pasting the log messages into ChatGPT / Claude / another LLM? Is it mainly that yours can iterate over a large logfile without blowing up the context window?
Does it suffer from the same issue as other LLMs, where it will always identify potential optimizations or improvements even if none are truly needed?
> Maybe you mentioned it in your demo and I missed it, but how does this differ from pasting the log messages into ChatGPT / Claude / another LLM? Is it mainly that yours can iterate over a large logfile without blowing up the context window?
We do quite a bit of aggregation over the log file, generate summary stats, and choose which bits to feed into the LLM. We plan to support more platforms than just Spark.
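A minimal sketch of that "aggregate, summarize, then prompt" idea, to make it concrete. Everything here (the `summarize_spark_log` helper, the regex, the field names) is hypothetical and just illustrates the shape of the approach, not the actual implementation:

```python
# Hypothetical sketch: collapse a raw Spark log into compact summary stats
# that fit comfortably in an LLM prompt, instead of pasting the whole file.
import re
from collections import Counter


def summarize_spark_log(lines: list[str], top_n: int = 5) -> str:
    """Aggregate log lines by level and by deduplicated error message."""
    level_counts = Counter()
    error_messages = Counter()

    for line in lines:
        match = re.search(r"\b(INFO|WARN|ERROR)\b\s+(.*)", line)
        if not match:
            continue
        level, message = match.groups()
        level_counts[level] += 1
        if level == "ERROR":
            # Strip volatile parts (task ids, counts) so identical errors aggregate.
            error_messages[re.sub(r"\d+", "<N>", message)] += 1

    summary = [f"Log lines by level: {dict(level_counts)}", "Most frequent errors:"]
    for msg, count in error_messages.most_common(top_n):
        summary.append(f"  {count}x {msg}")
    return "\n".join(summary)


if __name__ == "__main__":
    with open("spark-driver.log") as f:  # path is illustrative
        print(summarize_spark_log(f.readlines()))
```

The summary string is what would then be stuffed into the LLM context, rather than the raw log.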
> Does it suffer from the same issue as other LLMs, where it will always identify potential optimizations or improvements even if none are truly needed?
Funnily enough, instructing Sonnet 3.7 not to suggest unnecessary optimisations seems to have done the trick!
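For illustration, that kind of instruction can live in the system prompt. This is a hypothetical sketch assuming the Anthropic Python SDK; the prompt wording and model id are illustrative, not the product's actual prompt:

```python
# Hypothetical sketch: steer the model away from gratuitous suggestions
# via the system prompt (assumes ANTHROPIC_API_KEY is set in the environment).
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed model id for Sonnet 3.7
    max_tokens=1024,
    system=(
        "You are a Spark log analyst. Only suggest optimisations when the log "
        "shows concrete evidence of a problem. If the job looks healthy, say so "
        "and stop; do not invent improvements."
    ),
    messages=[{"role": "user", "content": "Summary stats:\n<log summary here>"}],
)
print(response.content[0].text)
```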
Fellow co-founder here! One fun thing about this project is that the entire frontend was vibe-coded using Bolt in a few days.
Very awesome. Not having to burn time on a UI that looks and feels nice is a huge win.
I'm also curious how the agent works.