Cursor and other IDE-modality solutions are interesting, but they train sloppy use of context.
From the extracted prompt, Cursor is using:
> Each time the USER sends a message, we may automatically attach some information about their current state…edit history in their session so far, linter errors, and more. This information may or may not be relevant to the coding task, it is up for you to decide.
This is the kind of context bloat that limits the effectiveness of LLMs on very hard problems.
This particular .env example illustrates the low-stakes type of problem Cursor is great at solving, but it also lacks the complexity that will keep SWEs employed.
Instead, I suggest folks working with AI start at the chat interface and work on editing conversations to keep contexts clean as they explore a truly challenging problem.
This often includes meeting and Slack transcripts, internal docs, external content, and code.
I’ve built a tool for surgical use of code called FileKitty: https://github.com/banagale/FileKitty and more recently slackprep: https://github.com/banagale/slackprep
These let a person be more intentional about the problem they are trying to solve by including only information relevant to it.
There is much missing from this prompt; tool call descriptors are the most obvious. See for yourself using even a year-old jailbreak [1]. There are some great ideas in how they've set up other pieces, such as Cursor rules.
[1]: https://gist.github.com/lucasmrdt/4215e483257e1d81e44842eddb...
https://github.com/elder-plinius/CL4R1T4S/blob/main/CURSOR/C...
They use different prompts depending on the action you're taking. We provided just a sample because our ultimate goal here is to start A/B testing models, optimizing prompts + models, etc. We provide the code to reproduce our work so you can see other prompts!
The Gist you shared is a good resource too though!
Maybe there is some optimization logic that only appends tool details that are required for the user’s query?
I’m sure they are trying to slash tokens where they can, and removing potentially irrelevant tool descriptors seems like low-hanging fruit to reduce token consumption.
Yes, this is one of the techniques apps can use. You vectorize the tool descriptions and then do a lookup based on the user's query to select the most relevant tools; this is called pre-computed semantic profiles. You can even hash the queries themselves, cache the tools that were used, and then do similarity lookups by query.
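A minimal sketch of that lookup, assuming an OpenAI-style embeddings endpoint and plain cosine similarity (the tool names and descriptions here are just illustrative, not Cursor's actual tools):

```python
# Sketch: pre-compute embeddings of tool descriptions, then pick the
# closest tools for each incoming query. Assumes the `openai` Python
# client; tool names/descriptions are made up for illustration.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TOOLS = {
    "read_file": "Read the contents of a file in the workspace.",
    "edit_file": "Apply an edit to a file in the workspace.",
    "run_terminal_cmd": "Run a shell command in the user's terminal.",
    "grep_search": "Search the codebase for a regex pattern.",
}

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Pre-computed "semantic profiles" for each tool (done once, offline).
tool_names = list(TOOLS)
tool_vecs = embed([TOOLS[name] for name in tool_names])
tool_vecs /= np.linalg.norm(tool_vecs, axis=1, keepdims=True)

def select_tools(query: str, k: int = 2) -> list[str]:
    q = embed([query])[0]
    q /= np.linalg.norm(q)
    scores = tool_vecs @ q  # cosine similarity against every tool
    return [tool_names[i] for i in np.argsort(scores)[::-1][:k]]

print(select_tools("why does my .env file not load?"))
# Only the selected tools' descriptors get appended to the prompt.
```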
I definitely see different prompts based on what I'm doing in the app. As we mentioned, there are different prompts depending on whether you're asking questions, doing Cmd-K edits, working in the shell, etc. I'd also imagine that they customize the prompt by model (unobserved here, but we can also customize per-model using TensorZero and A/B test).
Soooo.... wireshark is no longer available or something?
Wireshark would work for seeing the requests from the desktop app to Cursor’s servers (which make the actual LLM requests). But if you’re interested in what the actual requests to the LLMs look like from Cursor’s servers, you have to set something like this up. Plus, this lets us modify the requests and A/B test variations!
Sorry, can you explain this a bit more? Either you're putting something between your desktop and the server (in which case Wireshark would work), or you're putting something between Cursor's infrastructure and their LLM provider, in which case, how?
We're doing the latter! Cursor lets you configure the OpenAI base URL, so we were able to have Cursor call Ngrok -> Nginx (for auth) -> TensorZero -> LLMs. We explain it in detail in the blog post.
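The base-URL override is the whole trick: anything that speaks the OpenAI API can be pointed at your own gateway. A minimal sketch of the same idea with a plain Python client (the ngrok hostname, path, credential, and model name are placeholders, not our actual setup):

```python
# Sketch: the same base-URL override Cursor exposes, shown with the plain
# OpenAI Python client. The hostname, path, API key, and model name are
# placeholders -- substitute whatever your Nginx/TensorZero setup expects.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-tunnel.ngrok.app/openai/v1",  # Ngrok -> Nginx -> gateway
    api_key="gateway-credential",  # checked by Nginx, not by OpenAI
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # whatever model/function name the gateway routes
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```

Once every request flows through the gateway, inspecting, rewriting, and A/B testing variants all happen in one place instead of inside the IDE.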
Ah OK, I saw that, but I thought that was the desktop client hitting the endpoint, not the server. Thanks!
The article literally says at the end that this was just the first post, about looking, before getting into actually changing the responses.
(that being said, mitmproxy has gotten pretty good for just looking lately https://docs.mitmproxy.org/stable/concepts/modes/#local-capt... )
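For anyone going the mitmproxy route, its Python addon API makes it easy to log (or later rewrite) the LLM traffic as it passes through. A minimal sketch; the path filter and truncation length are arbitrary choices:

```python
# Sketch: a mitmproxy addon that logs the JSON bodies of chat-completion
# requests passing through the proxy. Run with:
#   mitmdump -s log_llm.py
# The path check is just an example; adjust it to whatever endpoint the
# client is actually calling.
import json
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    if "chat/completions" not in flow.request.path:
        return
    text = flow.request.get_text()
    if not text:
        return
    try:
        body = json.loads(text)
    except json.JSONDecodeError:
        return
    print(f"[{flow.request.host}] model={body.get('model')}")
    for msg in body.get("messages", []):
        print(f"  {msg.get('role')}: {str(msg.get('content'))[:120]}")
```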
Hmm, now that we have the prompts, would it be possible to reimplement Cursor servers and have a fully local (ahem pirated) version?