You are working against LLM attention. An LLM looks at a conversation and focuses its attention on certain points, usually the start and the end. Your previous work falls into the out-of-attention space and gets nuked.
If you're asking how to keep everything in attention: we currently can't.
Damn...
So you're saying I need some adderral.ai
I'm using an editor called Zed, and it has an option to create a "new thread from summary". It also shows at the top of the screen how many tokens I have used out of the total available, so with the combination of the two I think it's best to create a new "chat" periodically with a summary.
Man, that sounds like a great feature!! Will check it out
My code (that ChatGPT writes for me) is 500 to 1000 lines. Every 5-7 versions, it starts messing things up.
I keep the working versions in a Word file, landscape A3, with 3 columns (version number, comment/changelog, the_code) (yes: cheap, scalable, easy).
So, every 5-7 versions, I start a new chat. I ask ChatGPT to read the code and write a summary/description of it, and then I proceed to ask it for new changes/enhancements.
Wait until you learn about git
Yeah, a free GitHub account and breaking the code into small functions would 10x your flow here
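To make the git suggestion concrete: here's a minimal sketch of replacing the Word-file version table with a repo. The file name, commit messages, and user config are invented for illustration; each working version becomes a commit whose message is the changelog entry.

```shell
set -e
repo=$(mktemp -d)   # throwaway repo just for this demo
cd "$repo"
git init -q

# Version 1: save the working code as a commit instead of a table row.
echo 'print("v1")' > script.py
git add script.py
git -c user.email=dev@example.com -c user.name=dev commit -qm "v1: first working version"

# Version 2: another working state, another commit.
echo 'print("v2")' > script.py
git add script.py
git -c user.email=dev@example.com -c user.name=dev commit -qm "v2: added feature X"

# The whole "3-column table" for free:
git log --oneline            # version + changelog columns
git show HEAD~1:script.py    # the_code column, at any earlier version
```

The upside over the Word file is that checking out any earlier version, or diffing two versions, is one command instead of copy-paste.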
Can you explain a bit more what you mean by burning down? And what do you use .md files for? Documenting the code?
I use .md files to keep Cursor on track, the flow I use is something like...
Define a feature in detail (using transcription) -> get o3 or Gemini 2.5 Pro to break it down into very small testable tasks -> review this -> then paste it into a tasks.md file -> write an architecture.md file or similar for any additional context needed -> then prompt Cursor to work through tasks.md step by step.
This keeps it on track, with the whole feature defined from the outset.
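As a rough sketch of what that tasks.md / architecture.md pair might look like — the feature, task names, and project details below are all hypothetical, invented just to show the shape:

```shell
# Hypothetical example files for the workflow above; adapt names to your project.
cat > tasks.md <<'EOF'
# Feature: CSV export
- [ ] 1. Add export_csv() helper; unit test with a 2-row fixture
- [ ] 2. Wire the "Export" button to export_csv(); test the click handler
- [ ] 3. Stream large files instead of buffering; test with a 100k-row file
EOF

cat > architecture.md <<'EOF'
# Context for the agent
- Backend: Flask app in app.py; all routes live in routes/
- Config comes from the existing .env -- never create a new one
- Run everything through the Dockerfile, not local installs
EOF
```

Keeping guardrails like the `.env` and Dockerfile rules in architecture.md means they survive into every new chat, instead of being repeated by hand.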
But eventually... it will try to ignore the Dockerfile and set up locally, create multiple .env files, write code with placeholders, ignore files it's just created and written...
It's impossible to get it back on track - it gets into a debug loop, making things worse rather than better.
Oh, gotcha. Interesting approach with the .md files! I also have problems with the agent getting carried away and starting to create documentation, test files, etc. I mostly notice this with Claude 4.
It's just Cursor's system prompt problem; they just need time to "tame" the model after release.
For now, I just make sure that in every chat thread I have "DO NOT WRITE ANY DOCUMENTATION OR TESTS OR ANYTHING THAT WASN'T EXPLICITLY ASKED FOR. STAY LEAN"
But I've sort of reached the point where I don't mind Claude going off the rails a bit. Restricting it with .md files and constantly updating those guardrails sounds like more of a burden than a help.
It's just a prompt problem. Try reading the chat, and every time you see it doing excessive shit, stop the chat and slap it on the wrist: "never create a .env, I already have one, you just don't have access", etc.
Also, sounds obvious, but don't forget to create new conversations often. The "ignores files it's just created" sounds like context window overload. A 200k window for a new model sounds like a crime from Anthropic.
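One crude way to spot context overload yourself, if your editor doesn't show a token count like Zed does: estimate tokens from an exported chat log at roughly 4 characters per token. This is a rough heuristic, not a real tokenizer, and the 200k window and 70% threshold are just the numbers from this thread:

```shell
window=200000   # assumed context window, per the comment above
printf '%s' "some exported conversation text" > chat.txt   # stand-in for a real export
chars=$(wc -c < chat.txt)
tokens=$((chars / 4))   # ~4 chars/token is a common rule of thumb for English
if [ "$tokens" -gt $((window * 7 / 10)) ]; then
  echo "time for a new chat with a summary"
else
  echo "still within budget"
fi
```

Anywhere past roughly 70% of the window is a reasonable point to start a fresh thread from a summary, before the model begins dropping earlier files.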
You can ask Cursor itself to create and update the tasks.md file.
Tell it to remove each task from the file after it's done, then make a commit. That way, if it screws up at some point, you can check out the last good commit and start from there in a new chat.
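A minimal sketch of that checkpoint loop, with invented task names: one commit per completed task, so any later mess can be rolled back to the last known-good state before opening a new chat.

```shell
set -e
repo=$(mktemp -d)   # throwaway repo just for this demo
cd "$repo"
git init -q

# Initial task list, committed before the agent starts.
printf -- '- [ ] add login form\n- [ ] add logout button\n' > tasks.md
git add tasks.md
git -c user.email=dev@example.com -c user.name=dev commit -qm "tasks: initial list"

# Agent finishes the first task: it removes the line and commits the checkpoint.
grep -v 'add login form' tasks.md > tasks.tmp && mv tasks.tmp tasks.md
git add tasks.md
git -c user.email=dev@example.com -c user.name=dev commit -qm "task done: add login form"

# If a later task goes off the rails, find the last good checkpoint and reset:
git log --oneline
# git checkout <last-good-hash>   # then start a fresh chat from here
```

The commit messages double as a progress log, and tasks.md always reflects exactly what's left, even across chats.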
One tip I have found: start new conversation windows when changing focus, so it doesn't refer to history and make wild assumptions.
Yeah, I've found this helpful too. Mentally it feels like a commit or PR: all the code for one thing in one chat, then a new chat for new things.