There's also https://github.com/manzaltu/claude-code-ide.el if you're just using claude code.
I like that agent-shell just uses comint instead of a full vterm, but I find myself missing a deeper integration with claude that claude-code-ide has. Like with claude-code-ide you can define custom MCP tools that run Emacs commands.
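Roughly what that looks like (a sketch only: the defun is plain elisp, but the registration form is a made-up stand-in, so check claude-code-ide's README for the real entry point):

    ;; An ordinary elisp function the model can call as a tool.
    (require 'flymake)
    (defun my/buffer-diagnostics ()
      "Return flymake diagnostics for the current buffer as one string."
      (mapconcat #'flymake-diagnostic-text (flymake-diagnostics) "\n"))

    ;; Hypothetical registration; claude-code-ide's actual API differs.
    (claude-code-ide-define-mcp-tool
     "buffer_diagnostics"
     "Flymake diagnostics for the buffer the user is looking at"
     #'my/buffer-diagnostics)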
I spent some time trying to understand what OpenCode.nvim could do for me. It felt mostly like ways to take nvim things and inject them into OpenCode. Which was fine, I guess. I'm probably underselling it, but I was hoping for more, and it never really clicked. https://github.com/nickjvandyke/opencode.nvim
I find myself spending much more time in OpenCode than in nvim these days. With mcp-neovim-server, it's super easy to keep nvim open and ask OpenCode to show me things: open files, go to lines. This didn't require any nvim tweaking at all; it's just giving the LLM access to my nvim. It is absolutely wild how good glm-4.7 has been at opening friendly splits and at debugging really gnarly nvim configuration problems that have plagued me for years. It knows way, way more nvim than I do, and that somehow surprised me. https://github.com/bigcodegen/mcp-neovim-server
Definitely interested in the ACP angle. I feel like we're in a weird spot where ACP is the protocol by which the thing you do use talks to a headless thing you never see, and I'd love to see more than that. These connections feel 1:1, but I want to see human interaction in every agentic system, not just this me -> ide -> ACP agent flow with the IDE intermediating everything and acting as the sole UI. It should be able to do that, yes! But I also want an expectation that there can be multiple forces "driving" an ACP service.
I've used chatgpt-shell, but I have since moved my LLM usage to gptel inside org-mode buffers. Every day I use org-roam-dailies-goto-today to make a new file and turn on gptel (the use of org-roam-dailies is 100% optional). Then I do my interactions with gptel in there, using top-level bullets and setting topics to limit context.
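For anyone who wants to picture the glue, it's tiny (a sketch; my/daily-chat is just a name I made up, the two commands it calls are the real org-roam and gptel ones):

    ;; Jump to today's org-roam daily note and turn gptel on in it.
    (defun my/daily-chat ()
      "Open today's daily note and enable gptel there."
      (interactive)
      (org-roam-dailies-goto-today)
      (gptel-mode 1))
    ;; Per-heading topics (gptel-org-set-topic, if I have the command
    ;; name right) keep the context sent to the model to one subtree.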
I have 10 months of chats, and now I can analyze them. I even had claude code write me a program to do that: https://github.com/ryanobjc/dailies-analyzer - using gptel-mode lets me know which parts of the file are LLM output and which parts I typed, via a header in the file.
Keeping your own data as plain text has huge benefits. Having all my chats persistent is good. It's all private. I could even store these chats in a .gpg file and Emacs will auto encrypt/decrypt it. gptel and the LLM only get the text straight out of Emacs and know nothing about the encryption.
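That part is just stock Emacs EasyPG, nothing gptel-specific (a minimal sketch; the key ID is a placeholder):

    ;; Visiting a *.gpg file transparently decrypts it; saving re-encrypts.
    ;; gptel only ever sees the decrypted buffer text.
    (require 'epa-file)
    (epa-file-enable)
    ;; Optional: encrypt to a specific key instead of being prompted.
    (setq epa-file-encrypt-to '("me@example.com"))
    ;; Then keep the chats in e.g. ~/org/llm-chats.org.gpg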
I found this better than the 'shell' type packages, since they don't always keep context, and are ultimately less flexible than a file as an interaction buffer. I described how I have this set up here: https://gist.github.com/ryanobjc/39a082563a39ba0ef9ceda40409...
All of this setup is 100% portable across every LLM backend gptel supports, which is basically all of them, including local models. With local models I could have a fully private and offline AI experience, with quality depending on how much model I can run.
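For the local case, pointing gptel at Ollama is a few lines (this follows the shape of gptel's backend setup; the model name is just an example of something pulled locally):

    ;; A local, offline backend for the same org-mode chat setup.
    (require 'gptel)
    (setq gptel-backend (gptel-make-ollama "Ollama"
                          :host "localhost:11434"
                          :stream t
                          :models '(mistral:latest))
          gptel-model 'mistral:latest)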