Often I see YouTube videos that sell an overwhelmingly negative take on AI, like "OpenAI fails 93% of Jobs" or "AI is destroying the world" and other weirdly outlandish titles that are clearly aimed at clickbait.
Watching this content, I often get confused, because it never seems to highlight the actual real-world progress and use that LLMs in particular are getting for coding.
Much of what was "vibe coding" is becoming just coding now. This means for open source, we are no longer relying on companies that create "open-core" products that nerf/neglect the public version so they can sell their cloud product. We don't have to worry about a maintainer going AWOL on some Clojure or Elixir library and fret about hiring someone who has "20 years of experience". We don't need to pay for a lot of expensive enterprise SaaS tools that charge six digits when we can simply use an LLM to internalize existing packages and even create our own.
Those who have been using coding agents for the past six months know how much progress there has been, and the sheer pace of it, and that we are about to turn the corner, especially as new forms of computing are in the pipeline that will scale even faster without incurring more energy, moving away from text token generation to something else that humans can't read, etc.
While it's important to watch different takes, I think someone who consumes only YouTube and the videos its algorithm is designed to push is going to be shocked and left behind, because by the time these videos are produced, things have already progressed or are in a state of change. All in all, these videos should be treated as ephemeral commentary that ultimately loses its relevance due to the sheer speed at which things are changing.
You make some fair points here, but I'm unclear as to whether you're claiming Geerling's video is an example of an "overwhelmingly negative take on AI".
He raises some good stats and examples of "slop", and some dooming about code gen hitting a ceiling coupled with humans being the bottleneck, all very true statements, but overall it's lacking in showing that a lot of people are actually using AI code gen in open source with good success.
btw am I talking to a bot? All of your comment history is pretty similar, with a lot of em dashes.
What kind of open source projects have you been working on? Why do you think most serious open source projects reject AI PRs? Do you think you will spin up your own VLC, Linux kernel or TensorFlow in an afternoon, and that you won't need "maintainers"? What about security? Accountability? Group work?
The point of the video is to highlight how the inundation of AI-generated pull requests is harming open source. It doesn't say anything about AI success/failure rates, and it wouldn't make sense for it to go into details about that. However, it does mention that LLMs are useful for some things.
Because you're hiding the vast majority of facts, like the fact that we already used every last drop of data available, the fact that they spent hundreds of billions on it and no one makes money besides the people selling them GPUs and memory chips, the fact that contrary to most tech these things aren't getting cheaper to create, &c.
I suggest looking at some of the recent AI research, as it's not as dead a field as you imply. Everyone understands that there's no more text and video/image data from the web. But there are massive amounts of data outside of this, especially in the context of replacing jobs (think robots). Given the last couple of years, I don't see any reason to believe today's LLM architectures or training methods are the last to be developed. I think it's really crazy to imply that progress is somehow over.
I'm not sure how money spent is relevant. How many humans are left jobless in the next 20 years should be the concern. If it's 7% in only 5 years, with the very safe assumption that there will be progress, it's still not looking good.
All I'm saying is the hype around these things is always 10x what the reality ends up being, every single time. Autonomous cars, bitcoin, VR, metaverse, it's always the same story, a loooooot of hype, a bit of actual delivery, and move on to the next sca...project
Where does the 7% number actually come from? No one knows. Where are the hundreds of millions of unemployed people? No one knows. Where is the productivity increase? No one knows. It moves fast and they're shoveling a lot of shit down our throats, that I agree with; I'm just not seeing any of the magic or "AGI in two weeks my dudes" type of thing.
Not all hype is the same. The internet was hyped. The automobile ("faster horses please"). The telephone ("we have messengers on horses already"). Radio and television ("nobody is gonna sit in front of a box all day!"). Computers ("nobody wants these clunky loud things in their home"). Smartphones ("no keyboard, it's useless").
None of these industries had weirdos asking for $7 trillion to cure every disease and solve every problem known to man; they all grew organically over decades before finding their uses, without being actively forced into every aspect of your life two years after the MVP was released.
In all of those cases capital chased a future reality: railroads, electricity, and the internet all had massive speculation. The dot-com bubble came and went, but the internet stayed.
$7 trillion looks cartoonish due to inflation; you'd have to normalize it against other economic numbers. Large companies are funding capex with bonds that are still within healthy margins per employee head.
> to cure every disease and solve every problem known to man
Not sure why you view this so negatively; that would be absolutely worth every penny. Granted, there is a lot of noise too, but dismissing everything because of the volume is premature. Technological shifts rarely produce catastrophic labor supply/demand shocks that last; they change how we work, and the market adjusts.
You are conflating the hype layer typical with all major technological breakthroughs and the subsequent capability that gets baked into humanity as a result.
What does AGI have to do with it? Look up the top employer for your state [1] (or top few). If you go with the definition that AGI allows novel ideas/innovation, you'll see AGI is not needed for most jobs.
The term was first used by psychoanalyst A. A. Brill when describing the natural desire for women to smoke and was used by Edward Bernays to encourage women to smoke in public despite social taboos. Bernays hired women to march while smoking their "torches of freedom" in the Easter Sunday Parade of 31 March 1929,[1] which was a significant moment for fighting social barriers for women smokers.
Bernays is widely seen as the father of modern marketing, and helped lay the foundation for the consumer-based economy.
My guess is that this is going to be like every other technology that's democratized. You see a flood of low-quality output because you have a lot of new non-technical devs. Some of these are good enough to crowd out some of the preexisting tools. The volume creates noise, which also makes the good stuff harder to find. Eventually an ecosystem starts forming around these low-hanging products, which fill the gaps between pros and amateurs (think of what happened to video editing and Apple). Eventually you have more people creating a better product in the long run. There is a bit of a feedback loop here: as AI gets better, it makes the products it outputs better, which in turn can benefit AI as it learns from the improvements.
I wonder if we'll reach a breaking point with public forges, where they'll simply reject hosting a repo if it isn't from someone with a vetted background or if it detects hallmarks of LLM slop (e.g., many commits over a short period of time or other LLM tells).
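If a forge ever did try to detect "many commits over a short period", it would presumably start with cheap heuristics. Here is a purely illustrative toy sketch; the window and threshold are invented for illustration and are not anything GitHub or any forge actually does:

```python
from datetime import datetime, timedelta

# Toy heuristic: flag a repo whose commit timestamps contain an
# implausibly dense burst (e.g. 20+ commits inside a 10-minute window).
# Thresholds are invented; a real filter would combine many signals.
def looks_like_burst(timestamps, window=timedelta(minutes=10), limit=20):
    ts = sorted(timestamps)
    for i in range(len(ts)):
        # count commits falling inside the window starting at ts[i]
        j = i
        while j < len(ts) and ts[j] - ts[i] <= window:
            j += 1
        if j - i >= limit:
            return True
    return False

base = datetime(2025, 1, 1, 12, 0)
burst = [base + timedelta(seconds=15 * k) for k in range(25)]   # 25 commits in ~6 minutes
steady = [base + timedelta(hours=3 * k) for k in range(25)]     # 25 commits over ~3 days

print(looks_like_burst(burst))   # True
print(looks_like_burst(steady))  # False
```

Of course, this kind of tell is trivially evaded by spacing out commits, which is part of why detection alone probably can't solve the problem.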
GitHub recently added new repository settings to turn off pull requests or limit them to approved contributors. The announcement doesn't mention AI agents, but that's certainly relevant.
GH also needs to find a way to stop AI scraping of IP.
(Or not. It might be lucrative to host some novel algorithm on GH under a license permitting its use in generative LLM results, at a reasonable per-impression fee.)
I think there'll be space for curated forges at some point but they're going to live on the margins like most self-hosted repos do.
You could solve it with tech, using ideas from Radicle and Tangled, but the slop is ultimately a social problem, so you just have invite-only forges where the source of the invite is also held accountable (Lobsters style).
If you want a high quality internet experience these days you have to step out of the mainstream.
I think that AI will do the vetting of repos - just as humans do that now. Perhaps AI will do a better job. The only way we're gonna fight AI slop is with AI.
The real problem is that AI doesn't make any money. In fact, AI companies and business units hemorrhage cash. When AI is eventually priced at market cost, the use case for all of this collapses.
Personally I agree with the alternative opinion that it will be a golden age. I'm embarking on a project that involves refactoring something I did 18 years ago. I'm assuming that it'll take 1/10 the time to make a much better modern version with the assistance of LLMs.
OpenClaw's Peter is using Codex to analyze/de-duplicate PRs, extract good ideas from them, and then re-implement them.
> I spun up 50 codex in parallel, let them analyze the PR and generate a JSON report with various signals, comparing with vision, intent (much higher signal than any of the text), risk and various other signals. Then I can ingest all reports into one session and run AI queries/de-dupe/auto-close/merge as needed on it.
Some people bitch, others are real engineers solving novel problems.
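The workflow in the quote (fan out analysis jobs, emit one JSON report per PR, then ingest everything and de-dupe) can be sketched in miniature. Everything below is hypothetical: the signal names ("intent", "risk") and the de-dupe rule are stand-ins for whatever the agents actually emit, and the per-PR analysis is a stub rather than an agent run:

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Stage 1 (fan-out): each worker analyzes one PR and emits a JSON report.
# In the quoted setup this is a Codex session; here it's a stub that
# derives invented signals from the PR title.
def analyze_pr(pr):
    title = pr["title"].lower()
    report = {
        "id": pr["id"],
        "intent": title,  # stand-in for a real intent signal
        "risk": "high" if "refactor" in title else "low",
    }
    return json.dumps(report)

# Stage 2 (fan-in): ingest all reports, de-dupe by intent, and keep the
# lowest-risk representative of each duplicate cluster.
def triage(prs, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        reports = [json.loads(r) for r in pool.map(analyze_pr, prs)]
    by_intent = {}
    for r in reports:
        cur = by_intent.get(r["intent"])
        if cur is None or (cur["risk"] == "high" and r["risk"] == "low"):
            by_intent[r["intent"]] = r
    return sorted(by_intent.values(), key=lambda r: r["id"])

prs = [
    {"id": 1, "title": "Fix crash on resume"},
    {"id": 2, "title": "fix crash on resume"},      # duplicate intent
    {"id": 3, "title": "Refactor session handling"},
]
print(triage(prs))  # two surviving reports: ids 1 and 3
```

The interesting part of the real version is presumably the quality of the per-PR signals, not the plumbing; the aggregation step itself is straightforward.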
I know someone who started making a game by building his own engine. 5 years later he had made half an engine and zero games made on it.
Most of the people I know who are into herding AI spend most of their time doing that, but I can't say I've seen them accomplish much more than other colleagues, even the ones just using built-in AI or copy-pasting code from an AI chat.
> Some people bitch, others are real engineers solving novel problems
My most disliked thing about AI so far isn't AI itself, it's how nasty AI evangelists behave when it's criticized. You don't have to attack and/or insult people, you could have just left out that last bit.
The aesthetics of an argument is not the argument.
It's actually sickening that you are defending billionaire's toys, which make work for people already working for free; AIs are constructed from the illegal and unethical expropriation of labor, here and abroad.
Invoking the idea that it is classist or racist to reject yet another transparent power grab by the Epstein class against labor is maximum peasant brain.
That and it's also destroying the environment, trust, truth, creativity, people's ability to afford better computing equipment and so on.
don't forget the economy too
no labour? no demand to buy products/services
And don’t forget human relationships. My wife won’t talk to me anymore and it’s because I spend all of my time online talking about how AI is ruining things.
Just get an AI wife so you can then spend 100% of your time online talking to bots.
*Posted by my ClawdBot Agent
I mean, you joke (I hope), but I know people who literally run their entire relationship and all their communication through ChatGPT/Gemini. Whether it's breaking or improving the relationship, I guess the jury is still out, but I suspect it's not great long term.
The fact that something like this (which is bizarre, incredibly unhealthy, and just not how humans are wired socially) is entertained at all in the AI space and met with an "oh well, we'll wait and see long-term" is sort of a microcosm of how out of touch the push behind LLMs is. Anthropic, OpenAI, etc. have thrust a poorly functioning product, with unhealthy obsequiousness and a clever obfuscatory instinct to hide its numerous limitations, upon a legion of unsuspecting normal people who mistake its cleverness for true wit and insight. And now these people are blowing up their relationships, their passions, and their hobbies, and for what? What, actually, would routing your texts through Gemini or ChatGPT possibly do for your relationship? Is the onus not on us, as we individuate and socialize, to take it upon ourselves to learn how to communicate our feelings and emotions with each other? What sort of Kafkaesque absurdity are we living in?
I suppose Zizek predicted all of this years ago with his little anecdote about how even sex will be outsourced to technology in the future. I paraphrase, but he suggested that perhaps on a date one person will purchase an artificial phallus and the other a male pleasure item, and the two will sit on the floor and watch their pleasure objects mating with one another. That's about how absurd this reality is that the genAI pushers seek to impose upon the world.
Quote from a VP at a big tech megacorp a few months back:
> "If we don't start using this technology every day in every aspect of our jobs we will be left behind and never catch up."
I'm gonna get that one embroidered and framed on the wall above my toilet so for the rest of my life every day I can look at it and chuckle at the memory of how broken people were before the bubble popped.
I never quite understood why AI and LLMs are marketed the way they are, or why the powers that be behind this massive push seem so keen on selling it as a wholesale replacement for human careers (which, given the current curve of improvements, and despite what the naysayers of human intellect might suggest, is unfeasible).
Accountants didn't die off when calculators came on the scene. In no scenario is an LLM a drop-in replacement for any career field the same way CAD was a drop-in replacement for draftsmen -- and even then, draftsmen are still around today, in slightly smaller numbers, doing CAD drafting and design rather than using raw pen-and-paper skills.
Claude and Codex are exceptionally useful for reducing workload and improving productivity. But that's all they are. They're calculators replacing the slide-rule, drafting-esque drudgery of typing out all your code by hand. So why not market them like that? As helpers, assistants, tools to enable you to do things better and more efficiently? Which, in my usage of them, is all they're really good at. Instead, there's been a mad rush to shoehorn agents and LLMs and genAI into everything, outlandish claims like GPT writing better than Hemingway and Ginsberg, and absurd tools like Grok or Sora that are fundamentally broken, don't work well, and have flooded the internet with noise and disgusting slop.
And in all of this, they've created a cancerous gold rush that threatens to wipe out the entire economy when the jig is up and people realize how useless these claims are, and that at the end of the day, it's a fancy search engine, a calculator, that can think a little better and reason more than the ones of old.
It really feels like all of these CEOs are just borderline running a cult at this point.
> CAD was a drop-in replacement for draftsmen -- and even then, draftsmen are still around today, in slightly smaller numbers
You're off by an order of magnitude here. Even ignoring departments that were significantly downsized, you would need substantially more draftsmen to do the work currently done by a smaller number with AutoCAD.
The skill required is also much lower, doubly so if you consider Solidworks/Inventor. You get everything for free: design the 3D model and the projections come along for free.
> I never quite understood why AI and LLMs are marketed the way they are, or why the powers that be behind its massive push seem so keen on selling it as a wholesale replacement for human careers
Because labor is the largest line item at almost every software company on Earth. Executives' primary KPI is their market cap, so convincing investors that your profit/expense ratio is going to 2x in 6 months when you finally get full LLM adoption is an excellent way to juice your performance metrics, and thereby your bonus (mutatis mutandis for various financial setups).
But then what about the fact that it's exposing so many firms to immense risk, and essentially straight-up lying to investors as well as product adopters? Has no one thought about what happens when the chickens finally come home to roost?
My read is that it's a mix of tech firms having overhired a lot during the ZIRP+COVID era, as well as executives having a pretty short horizon for risk if the potential bonus is large enough.
> executives having a pretty short horizon for risk if the potential bonus is large enough.
This single line succinctly explains what is probably responsible for most of the economic dysfunction of the past 20 years.
There's an easy fix: you start hiring again. You probably don't even have to explain it; what happened just before was that you improved corporate finances by lowering your labour expenses, i.e. you're clearly growing and hence you need more people.
Lack of regulation in the VC space? I mean, in order to get these vast sums of money, they have to make all these sky-high claims, but I feel like in the old days someone would get at least a wrist slap for defrauding investors.
I mean, that's certainly part of it, but with Altman's grotesque comments today about the idea of raising a child being "more inefficient" than training an AI model, I think there's something deeper and darker psychologically: the VC people are fundamentally misanthropic and antisocial, and even though AI doesn't really fulfill their desire for a world where humans are entirely fungible, they want to sell it that way as a sort of bizarre wishcasting. It's just incredibly odd.
> I never quite understood why AI and LLMs are marketed the way they are, or why the powers that be behind its massive push seem so keen on selling it as a wholesale replacement for human careers
> It really feels like all of these CEOs are just borderline running a cult at this point.
Because the people at the top of these companies have absolutely no idea how the average human goes through life or what a normal human life even is. They don't know what having a job means, what a job is, what it means to be in charge of a family, to struggle for basic things, &c. Look at Zuck presenting his ridiculous image-gen AI [0], or the embarrassing "Uh HeLp Me MaKe A SaUcE FoR My KoReAn SaNdWiCh" demo [1], or his Wii-tier metaverse that no one above the age of 13 found remotely interesting. This is what these people spent hundreds of billions on, that's what they dream about, that's the future they want, even though 90% of the population does not give a single shit about it. And then you have Altman and his very unsettling takes on all kinds of topics, like "AI will develop bioweapons in 2027 but AI is also the solution to this problem", "humans use too much energy", or "I cannot imagine having gone through figuring out how to raise a newborn without ChatGPT". No shit, my dude: a man who never worked a day in his life, exit-scammed his way to the top, and paid someone to incubate his offspring has no clue about what evolution should have encoded in his DNA over 300m+ years? And we have to give him $7 trillion to speed-run the next stage of evolution? lol, lmao even...
Ah, and they need to raise trillions of dollars, literally; that's why they keep mentioning outrageous (but very profitable) things like curing cancer, Skynet, terraforming Mars, and solar-powered satellite datacenters, even though none of these things make any fucking sense. They need the next """hypergrowth""" vector, one more scam before we eventually reach the point of no return; it's all greed and FOMO as always. One day they shill self-driving cars, the next bitcoin, the next AI, always full of "in two years it'll be amazing, we promise, I can't explain how or why, but give me a few trillion", meanwhile it's going downhill fast for everyone outside of these echo chambers.
[0] https://youtu.be/TWpg1RmzAbc?t=570 [1] https://www.youtube.com/shorts/4-9xz77tQnQ
That marketing has been used to fire a lot of people; I think that's an important reason why it has that character. Then there were some true believers too, who were obviously important to the people firing other people.
Being able to remove a lot of people from your large workforce, and having other corporations do it too, is quite profitable: on average it pushes down the price of labour, you'll rehire some of them and replace some of the others, and perhaps your organisation even managed to become more efficient at the stuff that makes you money in the short run.
Then there was the change to the US tax code some years ago (Section 174), which made R&D spending more expensive to deduct.
the real problem isn't bad code getting merged, it's what happens after. when a human writes a module, they carry mental context that makes them the right person to fix subtle bugs 6 months later. AI-generated PRs have no owner in that sense -- they're orphaned at merge. the forge gatekeeping idea is interesting but i think the actual gap is who answers the pager at 2am for code nobody understands.
You overestimate my ability to keep mental context for 6 months.
Additionally, in most of the PRs I have seen reviewed, the quality hasn't really degraded or improved since LLMs started contributing. I think we have been rubber-stamping PRs for quite some time. Not sure that AI is doing any worse.
Depends on what the context is, at least for me.
The cognitive load of a code review tends to be higher when it's submitted by someone who hasn't been onboarded well enough, and it doesn't matter if they used an AI or not. A lot of the mistakes are trivial, or they don't align with the status quo, so the code review turns into a way of explaining how things should be.
This is in contrast to reviewing the code of someone who has built up their own context (most likely on the back of those previous reviews, by learning). The feedback is much more constructive and gets into other details, because you can trust the author to understand what you're getting at and they're not just gonna copy/paste your reply into a prompt and be like "make this make sense."
It's just offloading the burden to me because I have the knowledge in my head. I know at least one or two people who will end up being forever-juniors because of this and they can't be talked out of it because their colleague is the LLM now.
fair point on the memory -- humans forget too. but the rubber stamping problem is actually worse with AI-generated code because when something breaks, there's no single human who wrote it and has to live with the consequences. PRs have had low review quality for a while, agreed. the difference is accountability gradient -- a human author who pushed slop has at least some skin in the game when prod goes down.
People will tell you, just ask AI to find and fix bugs.
Let's see how that's going to work. (It's not going well so far.)
Orphaned, or as Peter Naur wrote in 1985 (https://pages.cs.wisc.edu/~remzi/Naur.pdf), dead programs :)
AI changes little here. It was never guaranteed that an author was available to contact regarding a past PR.
Merging a PR from a non-established contributor is often taking on responsibility for the long-term maintenance of their code.
which is why non-established contributors generally are discouraged from submitting large amounts of code.
This is a luxury more than a need-to-have, lots of companies will punt this to an offshore dev they hired just months ago.
Obviously the ultimate solution here is to put "don't write bugs into the code" in the original prompt.
Often I see YouTube videos that sell an overwhelmingly negative take on AI, like "OpenAI" fails 93% of Jobs or "AI is destroying the world" and other weirdly outlandish titles that are clearly aimed at clickbait.
Watching this content, I often get confused, because it never seems to highlight the actual real-world progress and use that LLMs in particular are seeing for coding.
Much of what was "vibe coding" is becoming just coding now. This means for open source, we are no longer relying on companies that create "opencore" products that nerf/neglect the public version so they can sell their cloud product. We don't have to worry about a maintainer going AWOL on some Clojure or Elixir library and fret about hiring someone who has "20 years of experience". We don't need to pay for a lot of expensive enterprise SaaS tools that charge six digits when we can simply use an LLM to internalize existing packages and even create our own.
Those who have been using coding agents for the past 6 months know how much progress there has been, and the sheer pace of it suggests we are about to turn the corner, especially as new forms of computing are in the pipeline that will scale even faster without using more energy, moving away from text token generation to something humans can't read, etc.
While it's important to watch different takes, I think someone who consumes only YouTube and the videos its algorithm is designed to push is going to be shocked and left behind, because by the time these videos are produced, things have already progressed or are in a state of change. All in all, these videos should be treated as ephemeral commentary that ultimately loses its relevance due to the sheer speed at which things are changing.
You make some fair points here, but I'm unclear as to whether you're claiming Geerling's video is an example of an "overwhelmingly negative take on AI".
If so, my suspicion is that you didn't watch it.
he raises some good stats and examples of "slop" and dooming around code gen hitting a ceiling, coupled with humans being bottlenecked, all very true statements, but overall it's lacking in showing that actually a lot of people are using AI code gen in open source with good success.
btw am I talking to a bot? all of your comment history is pretty similar, with a lot of em-dashes.
What kind of open source projects have you been working on? Why do you think most serious open source project reject AI PRs? Do you think you will spin up your own VLC, linux kernel or tensorflow in an afternoon and that you won't need "maintainers"? What about security? Accountability? Group work?
The point of the video is to highlight how the inundation of AI-generated pull requests is harming open source. It doesn't say anything about AI success/failure rates, and it wouldn't make sense for it to go into details about that. However, it does mention that LLMs are useful for some things.
> like "OpenAI" fails 93% of Jobs
I'm always confused how this isn't ridiculously impressive: "After only 5 years, AI can succeed at 7% of jobs."
Because you're hiding the vast majority of the facts, like the fact that we already used every last drop of available data, the fact that they spent hundreds of billions on it and no one makes money besides the people selling them GPUs and memory chips, the fact that contrary to most tech these things aren't getting cheaper to create, &c.
I suggest looking at some of the recent AI research, as it's not as dead a field as you imply. Everyone understands that there's no more text and video/image data from the web. There are massive amounts of data outside of this, especially around the context of replacing jobs (think robots). Given the last couple of years, I don't see any reason to believe today's LLM architectures or training methods are the last to be developed. I think it's really crazy to imply that progress is somehow over.
I'm not sure how money spent is relevant. How many humans are left jobless in the next 20 years should be the concern. If it's 7% in only 5 years, with the very safe assumption that there will be progress, it's still not looking good.
All I'm saying is the hype around these things is always 10x what the reality ends up being, every single time. Autonomous cars, bitcoin, VR, metaverse, it's always the same story, a loooooot of hype, a bit of actual delivery, and move on to the next sca...project
Where does the 7% number actually come from? No one knows. Where are the hundreds of millions of unemployed people? No one knows. Where is the productivity increase? No one knows. It moves fast and they're shoveling a lot of shit down our throats, that I agree with; I'm just not seeing any of the magic or "AGI in two weeks my dudes" type of thing.
Not all hype is the same. The internet was hyped. The automobile ("faster horses please"). The telephone ("we have messengers on horses already"). Radio & television ("nobody is gonna sit in front of a box all day!"). Computers ("nobody wants these clunky loud things in their home"). Smartphones ("no keyboard, it's useless").
None of these industries had weirdos asking for $7 trillion to cure every disease and solve every problem known to man, they all grew organically over decades before finding their uses without being actively forced into every aspect of your life 2 years after the mvp was released.
In all of those cases capital chased a future reality: railroads, electricity, and the internet all had massive speculation. The dot-com bubble came and went, but the internet stayed.
$7 trillion looks cartoonish due to inflation; you'd have to normalize it against other economic numbers. Large companies are funding capex with bonds that are still within healthy margins per employee.
> to cure every disease and solve every problem known to man
not sure why you view this so negatively, that would be absolutely worth every penny. Granted, there is a lot of noise too, but dismissing everything because of the volume is premature. Technological shifts rarely produce catastrophic labor supply/demand shocks that last; they change how we work, and the market adjusts.
You are conflating the hype layer typical of all major technological breakthroughs with the subsequent capability that gets baked into humanity as a result.
What does AGI have to do with it? Look up the top employer for your state [1] (or top few). If you go with the definition that AGI allows novel ideas/innovation, you'll see AGI is not needed for most jobs.
Don't fall for the hype or anti-hype.
[1] https://worldpopulationreview.com/state-rankings/largest-emp...
"Agentify is a small collection of utilities and MCP servers focused on safety, ergonomics, and automation."
Cool advertisement bro. This is how it must have been when they marketed cigarettes to women to drive up sales.
"freedom torches" was the exact phrase used. They were sold as a marker of female liberation.
https://en.wikipedia.org/wiki/Torches_of_Freedom
The term was first used by psychoanalyst A. A. Brill when describing the natural desire for women to smoke and was used by Edward Bernays to encourage women to smoke in public despite social taboos. Bernays hired women to march while smoking their "torches of freedom" in the Easter Sunday Parade of 31 March 1929,[1] which was a significant moment for fighting social barriers for women smokers.
Bernays is widely seen as the father of modern marketing, and helped lay the foundation for the consumer-based economy.
what part of https://agentify.sh offends you so much, can you please point it out ?
My guess is that this is going to be like every other technology that's democratized. You see a flood of low-quality output because you have a lot of new non-technical devs. Some of these are good enough to crowd out some of the preexisting tools. The volume creates noise, which also makes the good stuff harder to find. Eventually an ecosystem starts forming around these low-hanging products, which fill the gaps between pros and amateurs (think of what happened to video editing and Apple). Eventually you have more people creating a better product in the long run. There is a bit of a feedback loop here: as AI gets better, it makes the products it outputs better, which in turn can benefit AI as it learns from the improvements.
Previous discussion of the text version: https://news.ycombinator.com/item?id=47042136
Human slop can't get enough of this topic.
I wonder if we'll reach a breaking point with public forges, where they'll simply reject hosting a repo if it isn't from someone with a vetted background or if it detects hallmarks of LLM slop (e.g., many commits over a short period of time or other LLM tells).
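To illustrate the kind of tell mentioned above, here is a toy version of such a heuristic. It is entirely hypothetical, not a feature of any real forge, and the `burst` and `window` thresholds are made-up numbers:

```python
from datetime import datetime, timedelta

# Toy heuristic, purely illustrative: flag a repo whose commit
# timestamps cluster more tightly than a human could plausibly type.
def looks_machine_generated(commit_times, burst=20, window=timedelta(minutes=10)):
    times = sorted(commit_times)
    # Slide over every run of `burst` consecutive commits and check
    # whether the whole run fits inside `window`.
    for i in range(len(times) - burst + 1):
        if times[i + burst - 1] - times[i] <= window:
            return True
    return False
```

Real detection would obviously need far more signals (commit-message style, diff patterns, account history), and any single heuristic like this is trivially gamed.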
GitHub recently added new repository settings to turn off pull requests or limit them to approved contributors. The announcement doesn't mention AI agents, but that's certainly relevant.
https://github.com/orgs/community/discussions/187038
GH also needs to find a way to stop AI scraping of IP.
(Or not. It might be lucrative to host some novel algorithm on GH under a license permitting its use in generative LLM results, at a reasonable per-impression fee.)
I think there'll be space for curated forges at some point but they're going to live on the margins like most self-hosted repos do.
You could solve it with tech by using ideas from radicle and tangled but the slop is ultimately a social problem, so you just have invite-only forges where the source of the invite is also held accountable (lobsters style).
If you want a high quality internet experience these days you have to step out of the mainstream.
I think that AI will do the vetting of repos - just as humans do that now. Perhaps AI will do a better job. The only way we're gonna fight AI slop is with AI.
Some cleanup needs to happen, when the dust settles.
It's just not clear to me who, or what, will do it.
The real problem is that AI doesn't make any money. In fact, AI companies and business units hemorrhage cash. When AI is eventually priced at its market cost, the use case for all of this collapses.
Alternative opinion: https://news.ycombinator.com/item?id=47119404
Personally I agree with the alternative opinion that it will be a golden age. I'm embarking on a project that involves refactoring something I did 18 years ago. I'm assuming that it'll take 1/10 the time to make a much better modern version with the assistance of LLMs.
FOSS was the boot code. And the gullible evangelist people.
OpenClaw Peter is using codex to analyze/de-duplicate PRs, extract good ideas from them and then re-implement them.
> I spun up 50 codex in parallel, let them analyze the PR and generate a JSON report with various signals, comparing with vision, intent (much higher signal than any of the text), risk and various other signals. Then I can ingest all reports into one session and run AI queries/de-dupe/auto-close/merge as needed on it.
Some people bitch, others are real engineers solving novel problems.
https://x.com/steipete/status/2025591780595429385?s=20
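The quoted workflow could be sketched roughly like this. `analyze_pr` is a stand-in for one real agent run (in the quote, a codex session emitting a JSON report), and every field name and threshold here is my own invention, not anything from the actual setup:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one agent run: in the described workflow
# this would shell out to a coding agent and parse its JSON report.
def analyze_pr(pr):
    return {
        "pr": pr["number"],
        "intent": pr["title"].lower().strip(),  # crude intent signal
        "lines": pr["lines_changed"],
        "risk": "high" if pr["lines_changed"] > 500 else "low",
    }

def triage(prs, workers=8):
    # Fan out the per-PR analysis, then aggregate all reports at once.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        reports = list(pool.map(analyze_pr, prs))
    by_intent = {}
    for r in reports:
        by_intent.setdefault(r["intent"], []).append(r)
    keep, close = [], []
    for dupes in by_intent.values():
        # De-duplicate: keep the smallest diff per intent, close the rest.
        dupes.sort(key=lambda r: r["lines"])
        keep.append(dupes[0]["pr"])
        close.extend(r["pr"] for r in dupes[1:])
    return keep, close
```

The interesting part is exactly what the comment describes: the signals live in the reports, so merge/close decisions can be made over all of them together instead of PR by PR.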
I know someone who started making a game by building his own engine. 5 years later he had made half an engine and zero games on it.
Most of the people I know that are into herding AI spend most of their time doing that, but I can't say I've seen them accomplish much more than other colleagues, even the ones just using built-in AI or copy pasting code from an AI chat.
> Some people bitch, others are real engineers solving novel problems
My most disliked thing about AI so far isn't AI itself, it's how nasty AI evangelists behave when it's criticized. You don't have to attack and/or insult people, you could have just left out that last bit.
You are confusing trolling with an inability to handle criticism of AI.
It's funny seeing programmers' minds shut down when faced with an easy-to-fix problem (too many PRs), just because they hate AI.
That's an incredibly dismissive attitude to a real problem.
"They aren't paying their dues..."
The author sounds like a relatively well-off white dude in the 1950s... 60s, 70s, 80s, 90s...
I get it, everything is being massively disrupted right now. I'm not trying to say AI is good or bad, but the author's argument is weak.
The aesthetics of an argument is not the argument.
It's actually sickening that you are defending billionaires' toys, which make more work for people already working for free; AIs are constructed from the illegal and unethical expropriation of labor, here and abroad.
Invoking the idea that it is classist or racist to reject yet another transparent power grab by the Epstein class against labor is maximum peasant brain.