Inventing a new thing "for agents" always feels counter-productive. Your new thing isn't in the training data, so you have to teach the agent how to use it. Why not use tech that's already in the training data? Agents know Python and Django. Better yet, agents know Go, where the performance, maintainability, and deployment story are much nicer at no extra cost, since the agents write the code anyway.
The very nature of LLMs means you can't invent a thing for current agents to use that they'll be better at using than the things they already know how to use from their immense training data. You can give them skills, sure, and that's useful, but it's still not their native tongue.
To make a thing that's really for agents, you need to have made a popular thing for humans ten years ago, so there's a shitload of code and documentation for them to train on.
This was true a year ago, but if you give an agent a new spec to follow (e.g. a .md file), it will follow it.
We have a custom .yaml spec for data pipelines in our product, and the agent follows it as well as anything in the training data.
So while I agree you don't need to build a new thing "for agents", you can get them to understand new things that are not in the training data very easily.
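A sketch of how that kind of custom spec can be made agent-friendly: alongside the spec document, ship a validator that turns violations into loud, readable errors, so the agent gets corrective feedback instead of silently drifting. Everything here is hypothetical (the key names, `validate_pipeline`, the allowed step kinds are invented, not the commenter's actual product spec):

```python
# Hypothetical pipeline spec validator. The agent is given the .md/.yaml spec
# plus this check; errors are phrased so an LLM can self-correct from them.

REQUIRED_STEP_KEYS = {"name", "kind"}
ALLOWED_KINDS = {"extract", "transform", "load"}

def validate_pipeline(spec: dict) -> list[str]:
    """Return a list of human/agent-readable errors (empty list = valid)."""
    errors = []
    if "pipeline" not in spec:
        errors.append("missing top-level 'pipeline' key")
        return errors
    for i, step in enumerate(spec["pipeline"]):
        missing = REQUIRED_STEP_KEYS - step.keys()
        if missing:
            errors.append(f"step {i}: missing keys {sorted(missing)}")
        if step.get("kind") not in ALLOWED_KINDS:
            errors.append(f"step {i}: unknown kind {step.get('kind')!r}")
    return errors

# A spec as it might look after being parsed from the .yaml file:
spec = {
    "pipeline": [
        {"name": "pull_orders", "kind": "extract"},
        {"name": "clean", "kind": "transform"},
        {"name": "warehouse", "kind": "load"},
    ]
}

assert validate_pipeline(spec) == []
```

The design choice is the point: a spec the agent can check itself against behaves much like training data it never had.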
It looks like it's a fork of Django that just kinda changed a bunch of stuff arbitrarily?
From the readme: Plain is a fork of Django, driven by ongoing development at PullApprove — with the freedom to reimagine it for the agentic era.
Very likely it's being changed by an AI model, driven by human prompts.
That would be good if the changes are to slim it down by 80%.
Something like this has been on my mind for a while. When using LLMs for coding, I believe it is a significant benefit if the number of lines to be reviewed by humans is as small as possible. An app that is not much more than a configuration in a dense, custom-made DSL, with minimal code to specify business logic, would be the simplest artifact a human can review quickly and an LLM can manipulate with ease (provided there are good docs, linting, errors, maybe even a fine-tuned model at some point).
Everything which just works "by convention" or by "opinionated defaults" (allowing a tightly coupled but very feature-rich framework) helps to reduce the noise / lines that need to be reviewed.
While this approach might not be optimal for every project, I'm certain the opinionated defaults can work for many endeavours. And the reduction of complexity might be one important aspect that can make an "agentically engineered" project sustainable.
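A purely hypothetical sketch of the "app as dense config plus minimal business logic" idea (the names `Resource`, `APP`, and `on_create` are invented for illustration): the entire reviewable surface is one declarative structure, and custom logic is confined to small callables the framework would invoke by convention.

```python
# Hypothetical "app as config" sketch: opinionated defaults live on the
# dataclass, and the only thing a human reviews is the APP definition below.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Resource:
    name: str
    fields: dict[str, type]
    # Opinionated defaults: everything below is convention unless overridden.
    permissions: str = "authenticated"
    on_create: Callable[[dict], dict] = lambda row: row

# The entire "app" a human has to review:
APP = [
    Resource(
        name="invoice",
        fields={"customer": str, "amount_cents": int},
        # The only custom business logic in the whole definition:
        on_create=lambda row: {**row, "amount_cents": max(0, row["amount_cents"])},
    ),
    Resource(name="customer", fields={"email": str}),
]

invoice = APP[0].on_create({"customer": "acme", "amount_cents": -500})
assert invoice["amount_cents"] == 0
```

With a shape like this, a reviewer checks a handful of declarations rather than hundreds of generated lines, which is exactly the noise reduction the comment argues for.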
>Everything which just works "by convention" or by "opinionated defaults" (allowing a tightly coupled but very feature-rich framework) helps to reduce the noise / lines that need to be reviewed.
This is exactly why I've gone back to Ruby with Sinatra or Rails for my personal side projects, despite Ruby's horrid performance.
As long as you are content to remain on e.g. Rails' "Happy Path", then I've found agents do a fantastic job because there's lots of Ruby in the training set and there's less surface area where a context mismatch/hallucination can end up going off the rails. Pun only partially intended.
Nice. Love the idea behind this. I have been using Django for most of my vibe coded side projects just for the reasons stated in this thesis.
Django code is pretty easy to review quickly. LLMs are good at writing it.
Django is just old and bloated, so the fork is a good idea. Maybe I will use this for my next side project.
It’s vibe-coded, too. Pass.
As someone who has been leaning into this vibe coding thing recently, I'm kind of interested to know the sentiment here. What was the tell? Can you give line numbers or some reference? It feels like 100% certified-organic code is a pretty high bar going forward.
TLDR:
- fork of Django
- it's opinionated
- typed
- comes with skills / rules / docs baked in
I'm not against this idea in principle, but I'm also not sure why that is better than what's already out there, except maybe you save some tokens by not vibe coding this yourself?
I do think in the future we'll see some novel libraries that are agent-optimized first. I'm not sure if this is it, though.
The models are training on examples, and there are a lot of Django examples to learn from. Where is the advantage here? A surface for more potential bugs?
How does this compare to FastAPI + SQLModel?
So a sloppified Django spit out by Claude? Good luck with that.