Really enjoyed this post.
> Putting aside existential risks, I don't see a future where a lot of jobs don't cease to exist.
I'm personally betting on the plateau effect with LLMs. There are two plateaus I see coming that will require humans to fix no matter what we do:
1. The LLMs themselves plateau. We're already seeing new models get worse, not better, at writing code (e.g., Sonnet 3.5 seems to be better than 3.7 at coding). This could be a temporary fluke, or an inherent reality of how LLMs work (which is where I tend to land).
2. Humans will plateau. First, humans will see their skills atrophy as they defer more and more to AI rather than struggling to solve problems (and, by extension, learn new things). Second, humans will be disincentivized to create new forms of programming and write about them, so eventually the inputs to the LLM become stale.
Short-term, this won't appear to be true, but long-term (on the author's 10+ year scale), it will be frightening. Doubly so when systems that were primarily or entirely "vibe coded" start to break in ways that the few remaining humans responsible for maintaining them don't understand (and can't prompt their way out of).
And that's where I think the future work will be: in fixing or replacing systems unintentionally broken by the use of AI. So you'll either be an "AI mess fixer" or, more entrepreneurially, make "artisan, hand-crafted software."
Either of those I expect to be fairly lucrative.
Reminds me of these stories (the Asimov one I've posted before):
- "Profession", by Isaac Asimov: http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...
- "Pump Six" by Paolo Bacigalupi (the story of that title)
On your second point - I don’t agree that humans in general will plateau. I think instead the _gap_ between humans who crave to create and learn, and those who are ostensibly potatoes, will be magnified.
I see it a bit like the creator economy, where you have these maker vs consumer tranches of people.
Humans are fundamentally creators of tools and art and imaginary worlds - this is one of the factors that distinguishes us from animals. For most of human history, most humans spent a lot of their time creating. The very recent phenomenon of a large fraction of humans creating almost nothing in their adult lives is caused by modern economic systems, almost all of which force people to work so much every day that all their creative energies are sapped.
Move to a 4-day, 24-hour work week and you will find almost everyone creating again.
Throughout all stages of my schooling from kindergarten to college, there were always artists, tinkerers and builders, but they were never the majority.
I can't think of any time in history, under any economic system, where humans who create tools and art were the majority. Most people just want to have families and enjoy life.
Talk to people: pretty much everybody around you wants to make stuff.
Be it music, cooking, drawing, arranging the house, getting better at fishing, etc.
People who go through long stretches of "pure consumption" get depressed... (which is why I think it's important to protect kids from mobile phones; they're depression machines).
> eventually the inputs to the LLM become stale.
Seems plausible to me that they could just keep writing Python 3.13 till the end of time.
Take assembly, say: we didn't stop writing it because it stopped working.
As a functional building block, programming seems feature-complete.
"As a functional building block programming seems feature complete"
This might be one of the more fascinating things I've read in a long time. Care to expand on it? Genuinely curious.
Afraid there is no deep revelation lurking there.
All I meant is that programming seems reasonably protected against going LLM-stale by virtue of being low-level and malleable.
This feels like an "if I say it enough, people will agree and it will be true" kind of comment. Almost none of these propositions check out or even make sense. I literally can't distinguish between reddit commenters and HN commenters. An unoriginal HN complaint, but frustrating to witness over time.
1. Plateau != Regress. Why point to regressions as evidence of a plateau? Why only look at a single model and minor version? We are clearly still in AI's infancy; regressions are to be expected from time to time.
2. Where's the evidence of this? Humans are using AI to branch out and dip their toes into things that they wouldn't have fathomed doing before. How would that lead you to "disincentivized"?
> Doubly so when systems that were primarily or entirely "vibe coded" start to break in ways
So in this fantasy, everybody is vibe coding resilient code/systems that last 10+ years, everybody stops learning how to code, and after a decade or so things start breaking and everybody is in trouble? This world you're creating wouldn't stand up to the critique of sci-fi readers.
I'm sorry, but if we can vibe code systems that last 10+ years and nobody is learning anything because they are performing so well, then that's a job well done by OpenAI and co. We're probably set as a civilization.
That's an uninformed perspective. We can't be "set as a civilization" by locking in whatever our current progress is. ML models don't inherently progress anything. So yes, if we stop doing that, then in 10 years people will not just have stopped learning how to code, but possibly have stopped actually thinking for themselves, which in turn would mean progressing neither our ML models nor our civilization.
Like I said, in the short term this will sound false, but in the long term I expect it to be frighteningly accurate.
> I literally can't distinguish between reddit commenters and HN commenters.
No need to condescend. I have a fair amount of experience building with and using these tools daily. I'm not just some "reddit idiot."
> So in this fantasy everybody is vibe coding code that lasts for 10+ years and everybody stops learning how to code
I'm extrapolating. Look at what happened in the wake of the industrial revolution. Most people don't know how to fix or create anything today, and instead rely on fast-and-cheap products or services made or offered by other people. Hence the panic over China and tariffs. The AI-ification of everything is just a modern version of the same thing.
I could absolutely be wrong (and hope I am). But when you track human laziness over time, it leads to deterioration and incompetence. I view this as a "gradually, then all of a sudden" type of problem. One that will be incredibly difficult to dig ourselves out of later.
> I'm extrapolating. Look at what happened in the wake of the industrial revolution. Most people don't know how to fix or create anything today
Most people didn't know how to fix or create anything back then either. Except now we have more productivity than ever, more people working than ever, more output than ever.
People are fixing and creating out the ass in this society. We might not all be factory workers, but people are making a ton of things in general. There is more music being made than ever, more movies being made than ever, more small businesses, etc.
There is more information about how to fix things disseminated to the general population now than ever. It's just that what we build now is often so incredibly complex that fixing it is non-trivial or impractical. That's not a regression of society or our abilities or interests. There are videos on TikTok about fixing electric toothbrushes that have over 100k views and over a thousand comments. https://www.tiktok.com/@thetruestreviews/video/7458130570321...
None of what you say checks out, and starting off with what is basically "it doesn't make any sense now but I predict in 10 years it will make sense" is a lazy way to defend your point.
> But when you track human laziness over time, it leads to deterioration and incompetence.
Again, none of this tracks with reality. Who is tracking laziness? In your world the general population is lazy and incompetent, yet we are generally producing more and still advancing STEM.
You are not wrong, and that is a rational and plausible forecast.
Lazy comments like this shouldn't be accepted in this community. If you're going to post something, at least explain why you think so. What is rational, what is plausible about their comment?
The thing about being an “AI mess fixer” will be that you’ll still need experience that fuels the creativity to solve problems generated by the AI.
Yup. The people who fit this role well will be the types that do this work for fun anyway, purely out of enjoyment or curiosity. I don't expect those types to completely disappear; they'll just be incredibly rare (i.e., the Pareto principle or some bastardization of it).
Why do articles like this always say things like "I've used LLMs to get some stuff done faster" and then go on to describe how LLMs get them to spend more time and money to do a worse job? You don't need LLMs to frustrate you into lowering your standards; the power to do that was within you all along.
Has anyone actually measured this yet?
Much of this feels like the studies where people who take mushrooms, for example, feel like they are more productive, but when you actually measure it, they aren't. It's just their perception.
To me the biggest issue is that search has been gutted out and so for many questions the best results come from asking an LLM. But this is far different from using it to generate entire codebases.
No, there have been a couple of attempts, but no one would call those outcomes conclusive.
The "glue" comment here reflects a view from someone who does mostly software work. That's been the situation since mechanized production lines were first built. The job of the humans is not direct labor. It's to monitor the machinery, restart it, and fix it.
Power looms were probably the first devices like this. Somebody has to thread the loom, but then it mostly runs by itself.[1] Production lines with lots of stations will have shutdowns where a drill bit breaks, there's dirt on a lens, or some consumable runs out. Exceptions are hard to automate, and factory design focuses on minimizing exceptions and bypassing stuck cells.
It's helpful to understand how a factory works when watching how software development is changing. There's commonality.
So the phrase "vibe coding" is only two months old.[2] How widespread will it be in two years?
[1] https://www.youtube.com/watch?v=WyRW9XOuUdU
[2] https://en.wikipedia.org/wiki/Vibe_coding
AI is unlikely to take away jobs from software engineers. There’s no natural upper bound on the amount of software people can consume - unlike cars, food or houses.
Software engineers ultimately are people with “will to build”. Just as hedge fund people have a “will to trade”. The code or tooling is just a means to an end.
Huh, I have the opposite feeling: that people already have most of the software they want at this point.
Think about your browsing history over the last year. You’ve probably consumed an obscene amount of React components, maybe millions.
Your car has way more code than a decade ago and so does your TV.
These things might make you miserable but it’s still “demand” in the economic sense of the term. It keeps developers employed.
>Think about your browsing history over the last year. You’ve probably consumed an obscene amount of React components, maybe millions.
Yeah, most of which are rehashes of the same thing, and most of them on the same ~10 websites.
640k should be enough for anybody
"I like fixing thorny bugs". Not me. Any tool that can get me to the solution faster is always welcome. IME, AI does well handling the boring parts.
It depends on the thorny bug. I like fixing bugs where the solution is to implement something clever and I learn something in the process. I don't like fixing bugs where I forgot a comma or made a subtle off-by-one error.
Most thorny bugs fall into the latter in my experience.
And both are valid. Some people like building new products and features, some would rather fix existing ones.
I don't think that's the split being referred to. Some people (like myself) see code as a way to achieve a goal, and some people like writing code for the puzzle of it.
I've been having a different experience. Asking Claude to fix the bug again and again is annoying, so I'm still working in a "pull pieces in a bit at a time, understanding each" mode, and I fix the bug myself when it's faster to do so. In fact, the majority of the time I've been using the LLM to build tiny libraries for me, to avoid the need for the LLM in the running app. Kind of like StackOverflow on steroids. I don't feel like the glue; it just feels like I have superior tooling to get the info I need fast.
I'm still pretty pessimistic on all this. Just today, I had what should have been an obvious win for an LLM coding assistant to help me. I was writing a Go function that converts one very long struct into a second very long struct. The transformation was almost entirely wrapping the fields of the first struct in a wrapper in a completely rote way. If FieldA was an int on src, I wanted dest{ FieldA: Wrapper{ Value: src.FieldA, Ratio: src.FieldA/Constant }, ... }.
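Roughly, the shape I was after was this; a minimal sketch, where the struct and field names are made-up stand-ins for the real (much longer) types:

    package conv

    // Hypothetical constant and types standing in for the real ones.
    const Constant = 100

    type Wrapper struct {
        Value int
        Ratio int
    }

    type Src struct {
        FieldA int
        FieldB int
        // ...hundreds more fields like these
    }

    type Dest struct {
        FieldA Wrapper
        FieldB Wrapper
        // ...mirroring every field of Src
    }

    // convert wraps each field of src the same rote way.
    func convert(src Src) Dest {
        wrap := func(v int) Wrapper {
            return Wrapper{Value: v, Ratio: v / Constant}
        }
        return Dest{
            FieldA: wrap(src.FieldA),
            FieldB: wrap(src.FieldB),
            // ...and so on, once per field
        }
    }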
It couldn't do it. I prefilled all the fields (hundreds) and told it just to populate them, but it tried to hallucinate new fields, or it would do one or two, then both delete the fields I had added and leave a comment saying 'then do the rest'. I tried a bunch of different prompts.
I can see how some vibe coders could make useful things, but most of my attempts to use LLMs in anything not-from-scratch are exercises in frustration.
Which one?
Can we please make it a convention that whenever anybody posts anything about some LLM experience they had, that they include which model and UI driving it they used?
Parent's post is like saying: I tried to send an email with a new email program and it didn't work.
VS Code + GitHub Copilot (Editor Inline Chat) + GPT-4o
There is a story about this by Stanislaw Lem: "Elsewhere Tichy meets a race of aliens (called "Indioci" in the Polish original, "Phools" in the English translation) who, desiring perfect harmony in their lives, entrust themselves to a machine, which converts them into shiny discs to be arranged in pleasant patterns across their planet." - https://en.m.wikipedia.org/wiki/Ijon_Tichy#Stories
(Not glue, but close enough.)
Nothing stopping anyone fixing thorny bugs for fun! And hobby computing is more accessible now than ever.
If you build stuff for others, AI (mostly) removes typing and debugging from the equation; that frees you to think harder about what you're building and how to make it most useful. And because you're generally done sooner, you can get the thing into your users' hands sooner, increasing the iterations.
It’s win-win.
> I don't see a future where a lot of jobs don't cease to exist.
And the complete lack of a game plan on a societal level is starting to get worrying.
If we're going to UBI this, then we're going to need a bit more of a plan than some toy studies.
These well-articulated articles will soon turn into pure despair. Happened to me.
But I thought it was going to turn us into paper clips.
On the plus side, at least when I'm old and not so mentally sharp, my personal AI can tell me when I'm being scammed or why the wifi isn't working.
You really don't believe it will be the one scamming you?
So there's a wrong way and a right way to code with LLMs. The wrong way is to ask the LLM to write a bunch of code you don't understand, and to keep asking it to write more and more code to fix the problems each iteration has. That will lead to a massive tower built on sand, where everything is brittle and collapses at the slightest gust of wind.
The right way is to have it autocomplete a few lines at a time for you. You avoid writing all the boilerplate, you don't need to look up APIs, you get to write lines in a tenth of the time it would normally take, but you still have all the context of what's happening where. If there's a bug, you don't need to ask the LLM to fix it, you just go and look, you spot it, and you fix it yourself, because it's usually something dumb.
The second way wins because you don't let the LLM make grand architectural choices, and all the bugs are contained in low-level code, which is generally easy to fix if the functions, their inputs, and their outputs are sane.
I like programming as much as the next person, but I'm really not lamenting the fact that I don't have to be looking up parameters or exact function names any more. Especially something like Cursor makes this much easier, because it can autocomplete as you type, rather than in a disconnected "here's a diff you won't even read" way.
The best completions are those that keep you from mistyping variable names or help you figure out a dependency (automatically importing modules, restricting suggestions to the current scope/structure). Those have been solved problems for decades now. You can get a dumb version by listing all the symbols in the project directory, removing common keywords and punctuation, and doing some kind of matching to filter them. The other end of the spectrum is the kind of code indexing IDEA and LSP servers do.
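A rough sketch of that kind of dumb completer, assuming Go source files and a hand-rolled keyword list (everything here is illustrative, not any particular tool):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "regexp"
        "sort"
        "strings"
    )

    // A few keywords to drop from suggestions; extend per language.
    var keywords = map[string]bool{
        "func": true, "return": true, "if": true, "else": true,
        "for": true, "range": true, "var": true, "const": true,
        "package": true, "import": true, "type": true,
    }

    var ident = regexp.MustCompile(`[A-Za-z_][A-Za-z0-9_]*`)

    // collectSymbols walks dir and gathers every identifier-looking
    // token from .go files, minus keywords.
    func collectSymbols(dir string) map[string]bool {
        syms := map[string]bool{}
        filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
            if err != nil || info.IsDir() || !strings.HasSuffix(path, ".go") {
                return nil
            }
            data, readErr := os.ReadFile(path)
            if readErr != nil {
                return nil
            }
            for _, tok := range ident.FindAllString(string(data), -1) {
                if !keywords[tok] {
                    syms[tok] = true
                }
            }
            return nil
        })
        return syms
    }

    // complete is the "matching for filtering" step: plain prefix match.
    func complete(syms map[string]bool, prefix string) []string {
        var out []string
        for s := range syms {
            if strings.HasPrefix(s, prefix) {
                out = append(out, s)
            }
        }
        sort.Strings(out)
        return out
    }

    func main() {
        if len(os.Args) < 2 {
            fmt.Println("usage: complete <prefix>")
            return
        }
        fmt.Println(complete(collectSymbols("."), os.Args[1]))
    }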
Then you get into boilerplate, and if you find yourself writing a lot of it, that's a signal to start refactoring, add some snippets to your editor (error handling in Go), write some code generators, or lament the fact that your language can't do metaprogramming.
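For instance, the Go error-handling boilerplate people usually bind to an editor snippet looks something like this sketch (function and file names are placeholders):

    package main

    import (
        "errors"
        "fmt"
    )

    // fetchConfig stands in for any call that can fail.
    func fetchConfig(path string) (string, error) {
        return "", errors.New("not found")
    }

    func loadApp(path string) (string, error) {
        cfg, err := fetchConfig(path)
        // The lines below are the snippet-worthy part:
        // check, wrap with context, return.
        if err != nil {
            return "", fmt.Errorf("loading config %q: %w", path, err)
        }
        return cfg, nil
    }

    func main() {
        if _, err := loadApp("app.toml"); err != nil {
            fmt.Println(err)
        }
    }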
> but I'm really not lamenting the fact that I don't have to be looking up parameters or exact function names any more.
That's a reckless attitude to have, especially if the function has drastically different behaviors, like mutating its argument versus returning a fresh copy. All you're doing is assuming it behaves a certain way, while the docs you haven't read carry the relevant warning label.
That level of autocomplete has been around for many years before LLMs, for pretty much any widely used language.
Well, I guess I must have missed it in my thirty years of development.
Use IDEA for a Java project and your TAB key will wear itself out.
Nothing about horses?
Very well-articulated article on a shared feeling!
Hot take: we were already glue. We take in ideas / directives from product people and turn that into instructions for a computer to use to build a software package.
The only difference in a “vibe coding” world is that now these “instructions” that we pass to the computer are in English, not Java.
Not entirely, because the snippets you get when vibe coding are derived from actual coding.
Really interesting that I have so far had hardly any use for code generators except when some glue was needed. Possibly this new revolution may be headed in multiple conflicting directions simultaneously?
Well written. I agree with the basic premise of the idea; I just think the changes will be even more dramatic.
A lot of us are stationary, thinking the stuff and people around us will be automated, but not us: "I am special." Well, I fear a lot of people will find out just how special they unfortunately are (not).
Guys like these need dmt. srlsly.
I would highly caution against recommending DMT to random strangers. It is not for the faint of heart and it is also nowhere near a magic fix-all. Also, its routes of administration mostly suck (smoking/vaping or MAOIs).
Lammers should try anyway.
Abstract
Artificial intelligence (AI) and psychedelic medicines are among the most high-profile evolving disruptive innovations within mental healthcare in recent years. Although AI and psychedelics may not have historically shared any common ground, there exists the potential for these subjects to combine in generating innovative mental health treatment approaches. https://nyaspubs.onlinelibrary.wiley.com/doi/10.1111/nyas.15...
That application of psychedelics is almost entirely dissimilar to what you are proposing. While a lot of the emerging research in the field is nice (and long overdue in my opinion), I would be absolutely floored if any of it recommended DMT for the treatment of dissenting opinions.
It's not like I couldn't also use some DMT, since I already use psychedelics on a damn near weekly basis. It's just not something that you should be telling random strangers to try. It works differently on everyone, is not always helpful, and can even be harmful.
Well, this field would not advance without experiments either: https://scitechdaily.com/scientists-flip-two-atoms-in-lsd-an... https://www.amazon.com/AI-DMT-Simulating-Experience-through/...
Not like this.