I gave up on r/programming after an article I wrote (thoughtfully, without AI, even though the content might not have been super interesting) got mod-slapped with a stickied comment "This content is low quality, stolen, blogspam, or clearly AI generated".
Ironically, that comment was added three months after I posted the article, when it was nowhere near the front page anymore, in a clearly automated and AI-driven review.
Reddit is a low-quality platform, and the sorts of people who would be interested in moderating a popular subreddit like r/programming are even less fit to be moderators than the average moderator. It would be better if people stopped using the platform entirely.
Do you think maybe it's the disclosure about self promotion you added? You explicitly say the purpose of the blog post is to promote your consultancy, so that might be why they marked it blogspam. I know it feels like you're being forthright, but really that you're promoting yourself is implicit in the fact it's a personal blog, so you can leave that out and still be honest.
Yeah, maybe it's that, though I still wouldn't expect someone to categorize the post as blogspam, even if they just glance at it. (At least according to my definition of blogspam, but I guess everyone has their own.) But yes, pragmatically I should probably remove the disclaimer.
I gave up on Reddit after many years of acting as characters on the Venture Bros subreddit. Every so often I would retire my account and begin a new one. I've had MANY over the years. I've used Reddit "cleanup" apps to remove/clean the content I've created. Good stuff over time, very niche and specific to VB.
I gave it all up when Reddit started recycling my old accounts and reposting my content as if it were new -- but not authored by me, just regurgitated back onto the site.
If that happened to me, you can bet it's happening en masse. Which indicates to me that the site is really dead.
AI programming is fundamentally different from programming, and as such the discussions merit separate forums.
If r/programming wants to be the one solely focusing on programming then power to them. Discussing both in combination also makes sense, but the value of reddit is having a subreddit for anything and “just programming” should be on the list.
> AI programming is fundamentally different from programming
It's really not. Maybe vibecoding, in its original definition (not looking at generated code) is fundamentally different. But most people are not vibe coding outside of pet projects, at least yet.
Hopefully this does not devolve into ‘nuh-uh’-‘it is too’ but I disagree.
Even putting aside the AI engineering part where you use a model as a brick in your program.
Classic programming is based on the assumption that there is a formal, strict input language. When programming, I think in that language; I hold the data structures and connections in my head. When debugging, I have intuition about what is going on because I know how the code works.
When working on somebody else’s code base I bisect, I try to find the abstractions.
When coding with AI this does not happen. I can check the code it outputs but the speed and quantity does not permit the same level of understanding unless I eschew all benefits of using AI.
When coding with AI I think about the context, the spec, the general shape of the code. When the code doesn’t build or crashes the first reflex is not to look at the code. It’s prompting AI to figure it out.
It is not. One version of a compiler on one platform transforms a specific input into an exact and predictable artefact.
A compiler will tell you what is wrong. On top of that the intent is 100% preserved even when it is wrong.
An LLM will transform an arbitrarily vague input into an output. Adding more specification may or may not change the output.
There is a fundamental difference between asking for “make me a server in go that answers with the current time on port 80” and actually writing out the code where you _have to_ make all decisions such as “wait in what format” beforehand. (And using the defaults is also making a decision - because there are defaults)
Compilers have undefined behaviour, but that UB exists in well-defined places.
Even a 100% perfect LLM that never makes mistakes has, by definition, UB everywhere the spec is lacking.
Right, they allow for the idea of gradual specification - you can write in broad strokes where you don't care about the details, and in fine detail when you do. Whether the LLM followed the spec or not is mostly down to having the right tooling.
The value is in the imperative: the computer does what you tell it to do. That control is very powerful and is arguably a major reason computer technology is as powerful and popular as it is today. Bits don't, generally speaking, argue with you the way analog computing, whether by electronic or mechanical means, did before the transistor.
You can certainly write in an imperative or functional style, but you are still telling the computer what you want. LLMs take imprecise language and can generate output only loosely bound to what people actually intended. They have their use cases too, but they have a radically different locus of control. Compilers don't ask you to give up precision either; they do what you tell them to do. AI can do whatever it thinks is the most likely next token, which is foundationally different from what we do when we engage in programming, or writing in general.
If you use an LLM to generate source code you are vibecoding.
You specify the problem in natural language (the vibes) and the LLM spits out source (the code).
Whether you review it or not, that is vibecoding. You did not go through the rigor of translating the requirements to a programming language, you had a nondeterministic black box generate something in the rough general vicinity of the prompt.
Are people seriously trying to redefine what vibecoding is?
No, that is literally vibecoding. Reviewing vibecoded source is just an extra step. It's like saying "I'm not power-tool gardening, I use a pair of gardening scissors afterwards." You still did power-tool gardening.
As additional proof, the dictionary definition of vibe coding is "the use of artificial intelligence prompted by natural language to assist with the writing of computer code" [1]
It seems like vibecoders don't like the label and are retconning the term.
Both you and the Collins dictionary (merely one dictionary, not an absolute authority) are retconning. “Vibe coding”, as originally coined in this tweet, means something more specific: to generate code with LLMs and not really look at the output. The term itself suggests this too: reviewing code is not exactly a vibes-based activity, is it?
That tweet coins the term, we agree there. The activity it describes is using natural language to generate software. Whether you add a review process or not doesn't substantially change that. Sure, Karpathy says he doesn't "read the diffs anymore". Why does he say "anymore"? Clearly he was reading them at some point. If not reading any diffs were a core part of the activity, that wouldn't be the case; the tweet itself clearly outlines that as optional. He's clearly not talking about a core part of the activity.
I think the tweet is pretty clear on its intention for the definition and I’m not interested in arguing about it.
I do think the dictionary definitions, such as they are, are coming from a real place: some people do use the more general definition. And you seem to already know about both definitions. So why argue so belligerently and definitively in the first place? Parent comments you were replying to were obviously using the original definition. Talking about “retconning” is obviously silly given this timeline. Meaning in language is not a race to be the first to make it into a dictionary. It’s a very new phenomenon that new terms make it so quickly into a dictionary at all, and they’re always under review. So maybe factor that into your commentary?
Because I believe the broad definition is more widely used, I also don't think the narrow term is useful or meaningful, and I think it's being used purely by vibe coding practitioners who feel that the term has negative connotations.
This all started with the parent comment telling someone else (belligerently and definitively) using the broader definition that they were wrong.
The narrow term is very useful, there is obviously a world of difference between reviewing the output of an LLM and not - the latter is irresponsible. It shouldn’t be surprising that people bristle when being accused of it. It doesn’t make sense to accuse someone of redefining a term to make themselves feel better when the history of the term shows that yours is the redefinition. The simpler explanation is that the accused just doesn’t like being called irresponsible - not that they’re trying to defend LLM code generation from someone who doesn’t like it.
You're saying what I'm saying. They feel self conscious about the term "vibe coding".
And to be clear, nobody accused the people who lashed out here. They reacted to general statements that people are vibe coding.
I also don't understand why the term vibe coding couldn't contain a spectrum of responsible use. Just say you're reviewing your vibe coded commits!
Clearly the issue here is about how vibe coders perceive the term vibe coding. Some of them feel that it's demeaning and are trying to wiggle their way out of the label by arguing semantics.
No, people think it’s demeaning because they are using a different definition to you, the definition which was the original one. Don’t know how I can put it clearer.
I mean, no, why would it be? There is so, so much to talk about in programming other than AI. Meanwhile, the current HN front page feels like 90% LLM spam: the complete antithesis of what I used to come here for.
I personally can’t wait for no-ai communities to proliferate.
Even taking your estimate at face value, it would be asinine for the community here to censor AI-targeted discussions in the way I think you'd like to. The same goes for a programming community that censors discussions about LLM programming.
You are basically asking for a brain drain in a field that—like it or not—is going to be crucial in the future in spite of its obvious warts and poor implementation in the present. If that's what you want, be my guest and encourage it; but who's authorized to unilaterally make that decision in a given forum?
In the present case, the moderators for r/programming are. But they're making a mistake by marginalizing the technology that's redefining the practice because people talk about it too much, instead of thinking about how to talk about it effectively and then steering the community in that direction.
But that's a full-time job. Which is why I think HN may turn out alright in the long run or a similar community will replace it if it fails to temper the change in the industry.
What this decision signals to me is that r/programming has been inert for some time. I'm sure plenty of programmers, irrespective of their position on AI, will find the community rejoicing in its resignation to the technology's influence as their cue to finally exit.
There can't be any interesting discussion about AI programming. Every conversation boils down to what skill files you use, or how Opus 4.6 compares to Codex, or how well you can manage 16 parallel agents.
There genuinely is a lot of interesting discussion to be had about LLMs, and I know this is true because I discuss things with my coworkers daily and learn a lot. I do admit that conversation online about LLMs is frequently lacking. I think it's a bit like politics - everyone has an opinion about it, so unfortunately online discourse devolves to the lowest common denominator. Hey guys, have you noticed that if you use LLMs frequently it's possible you'll forget to think critically?
But "there can't be any interesting discussion about AI programming" is completely false.
For me, almost every single time a conversation like this happens in real life it boils down to the one side claiming that "This is the future" and "Don't get left behind" followed by a torrent of hype and buzzwords. So no, there is no interesting conversations to be had about LLM programming anymore.
Some interesting conversation one can have with coworkers specifically:
1. How should code review and responsibility for code be updated to a) increase velocity, b) keep quality, and c) keep reviewers from burnout? There are plenty of scenarios in which vibe coding a component in an afternoon is the correct choice, even if it is buggy, insecure, and no one really understands it.
2. Which parts of the codebase work well with code assistants, which don't? Why? What could be changed to make it easier? In my experience, Claude Code sometimes loses its mind on infra topics. It is also not very good at complex, interconnected services (humans aren't either).
3. Which tasks could be offloaded to agents to save everyone time and sanity? Creating Jira tickets from meeting transcripts is an obvious one; collecting and curating bug reports is another.
4. How should we design systems to better work for coding agents? Does it influence our tech choices? Should it influence them?
5. Is AI a net positive or negative for security?
And so much more. The last topic in particular is incredibly important, and things are developing so fast that you can probably have a new conversation on it every two weeks.
Maybe you struggle to have good conversations because I just provided an anecdote and you immediately stated that my anecdote is false? If this is how you typically interact with people I’m not surprised you’re not having interesting conversations.
He didn't state your anecdote is false, the first two words of his comment are "for me". That means in his experience, not yours.
Ironically, in your crusade to wave the "I'm being censored!" flag, the only person who is trying to do any censoring... is you!
And to top it off, as if that wasn't enough, you're also incredibly snarky and basically implying that this person who just vaguely disagrees with you must be unlikable or something. Which, in another twist of irony, actually makes YOU appear unlikable, because what well-adjusted adult would feel the need to throw someone under the bus for slightly disagreeing with them?
The problem is that mrcsharp added the last sentence: "So no, there is no interesting conversations to be had about LLM programming anymore." So they are definitely trying to turn their anecdote into a universal.
Most of your comments about johnfn are still apropos, though...
> "there is no interesting conversations to be had about LLM programming anymore"
The first sentence was in his experience, but this is a universal assertion. He is claiming no one, in the world, is having interesting conversations about LLMs.
Is my response really so off-base? Imagine you say "I like Rust because it made my app go fast" and someone replies "There is no one who has used Rust to improve performance." Do you really think that's a normal way to respond to someone sharing an anecdote?
But that's not how he responded because there's a whole ass comment before that.
Okay sure, if you read the comment and then use a Men in Black mind wiper thingy before reading the last sentence, then it might seem brazen or universal. But that's not what you did.
The only way that last comment can reasonably be taken to mean "for everyone on Earth" is if you did not read the lines before it. Because, in that context, to me, it's clear he is only talking about his experience.
This is a phenomenon I've noticed lately: everyone feels the need to add a disclaimer to everything, and not doing so is seen as an "aha, gotcha!" type thing. But we're not algorithms. You don't read one line at a time and then digest it.
You're human, he's human, and there's context. You know that it would be extremely unreasonable for someone to think that nobody, anywhere, has anything to say about LLMs right? Okay. That doesn't mean that this person is being unreasonable.
Read his comment in context. The comment thread, condensed, is:
mudkipdev says "There can't be any interesting discussion about AI programming". We agree that is a universal claim. (Right? I mean, your comment says "You know that it would be extremely unreasonable for someone to think that nobody, anywhere, has anything to say about LLMs right" but isn't this a clear example?)
I say "There can be". We agree that is anecdotal.
mrcsharp says "For me, [stuff that supports mudkipdev]. Therefore, there is no interesting discussion".
He is re-asserting mudkipdev's point. mudkipdev says A, I say !A, he says, actually, A.
Your interpretation has him read the back-and-forth between me and mudkipdev, and respond to "A", "!A" with "B". If you only read mrcsharp and nothing else in the thread I can understand this reading, but the context changes things.
My pet peeve with all LLM discourse is whenever someone mentions any problem they experience with LLMs or any mistake they make, someone comments that humans make the same mistake.
I disagree and you could reduce basically anything to this: 'there can‘t be any interesting discussion about React. Every conversation boils down to which framework you use or how you manage state or whether you use typescript or javascript‘
All of those are opinions about programming. Which framework, which language, etc.
Conversations about which model to use aren’t conversations about programming.
A better analogy would be some topic that you can’t discuss without it boiling down to which text editor you should use. It’s related to programming, a little. But it’s not programming.
That is exactly why I left reddit. r/javascript had almost completely abandoned JavaScript discussions for React and Angular while r/programming was half filled with irrational JavaScript fear nonsense.
That isn't why /r/programming banned it. They banned it because every discussion about LLMs inevitably devolves into discussions about AI slop in varying levels of civility, and the rare good LLM submissions/discussions do not offset it.
Other tech-adjacent subreddits such as /r/rust have banned LLM discussion for similar, more pragmatic reasons.
Genuine question: how to distinguish yourself from the stream of slop?
I am also annoyed by the endless stream of articles and projects related to LLM-assisted coding. Not because I dislike LLM-assisted coding as an idea, but because it's all more of the same (as you said). I think that there are still a lot of low-hanging fruit in improving LLM harnesses that no one is working on because everyone seems to be chasing the latest trends ("agentic", "multiagentic", "skills") without thinking bigger.
But I'm afraid that if I finally invest time and implement some of my ideas on making LLM-assisted coding better (reliable, safer, easier for humans to interpret and understand generated code), I won't be able to gather any feedback. People will simply dismiss it as "yet another slop for creating more slop" and that's it.
It's like saying there's no interesting discussion about programming: just whether OOP is overhyped, whether Python is slow, or how well you can convert a C codebase to Rust.
You have not seen my recent WhatsApp chats. Me and a pal are talking about what we're doing with Claude code, and it's quite interesting!
Just like discussions about traditional programming never were only about syntax and type systems, AI discussions aren't only about prompts and harnesses. I find there's quite a bit of overlap actually! "How do you approach this problem?" Is a question that is valid in both discussions, for example.
Seems a lot of commenters here dislike their decision, I like it though.
LLM-generated projects, articles, blogs are low-effort products lacking authenticity.
And the discussion on LLM itself can in the long run be fairly tiring, follow r/LocalLLaMA for a while and you'll see what I mean. But if you are really into LLMs though, that sub is great.
It is simply not fun to go on a subreddit and see 90% of it being projects and blogs that are obviously created using AI, with authentic content pushed to the side due to the high volume of artificial work. r/Python was horrible at one point, but the mods have been stepping up their game.
> LLM-generated projects, articles, blogs are low-effort products lacking authenticity.
I think this is mostly true but not completely true. LLMs are a tool, and right now we are learning how to use it, how to use it well, and more importantly how not to use it.
I created an account and started reading this site primarily for programming news when r/programming took a precipitous dive in quality around 2020 or so. Before it was an example of one of the few good communities there, but it quickly became show and tell (ironically this was against its unenforced rules). And any real interesting posts had no discussion. But then I noticed the "Other Communities" tab would show posts from a HN posts sub that tracked posts here, and suddenly I was able to get great information. A post about CockroachDB that had 20 boorish comments complaining about its name over there would have the designer of it over here answering technical questions about its capabilities.
THAT SAID, I think this might be what gets me to go back to that place. I used to come here to read about new Python tooling, latest database development news, interesting thinkpieces on development practices, etc. Now it's dominated by AI evangelism, "I'm Showing HN™ What I Used My Claude Tokens On :)", AI complaining, AI agent strategies, AI's-impact-on-the-industry news, etc. There are some non-AI posts but not as many good ones as there used to be, and a lot of the non-AI posts quickly turn out to be AI-written. Because they respect their time as a writer greatly and my time as a reader not at all. It's ClankerNews, the Hackers are in short supply.
They switched their best sorting algorithm to be engagement based rather than upvote based [1]. Upvotes are just one of many metrics, but heavy comment interaction is another. It incentivizes rage bait and performing for the crowd with every comment and post. They also switched into an almost purely moderator curated frontpage [2] rather than allowing users to vote.
I've wondered the same thing, but you growing up definitely has to be a factor.
> Just angry people scolding each other all the time.
This really does describe it perfectly. I don't know about others, but focusing on my career pulled me out of a relatively low-income and dysfunctional environment. Reddit too often reminds me of people I used to know in real life.
It's been so many years since then, and finding and living a better life was so intertwined with my young adulthood that I almost convinced myself people like that don't exist in real life anymore. I thought the whole world had moved on, but search results nowadays prioritize Reddit enough that I'm routinely proven wrong.
Contrary to popular belief, I don't think most of the stuff on there is fake. Those people probably really are like that. Certain ways of thinking can become so normalized that they don't even see what there is to be ashamed about. What I sense the most on there is a lot of stress and the resulting irrational fears that pour out of people when they feel too much pressure. People under a seemingly endless and vague threat will go a little nuts and start to swat at anything that disturbs their worldview.
A good test for any community: try posting something that is factually incorrect but that supports the agenda of the community. Does the community call it out? On Reddit, it does happen.
In my experience, that kind of thing might only get called out by moderators or the outliers who reply the most. They're the ones with the strongest interest in proving anything. Only then will the rest of the community dogpile. Otherwise, it goes ignored.
It was inevitable given it's a top 7 most popular site.
The reality is that the masses, the real world, the average person... is an asshole.
It doesn't show in the real world, because people learn to hide their assholeness at a very early age (or they learn the hard way, by getting punched in the face).
On an anonymous forum, you don't have to hide your assholeness.
Frankly it's amazing the site never devolved into 4chan. I attribute that to all the people doing free labor --> mods.
Reddit turned way more into an echo chamber over time. The moderators and the downvote system destroyed the site. The shift from free speech, libertarian and anarchist ideology into heavily left leaning definitely didn't help.
I know this is snarky, and I'm sorry ahead of time. But I don't know how else to make this point...
The fact that the people running r/programming don't know not to wait until April 2 to publish this tells me that they don't have real-world experience shipping software in a business environment.
We are SO past the point of software being developed without LLMs at _all_, the trend line is never going to reverse. I don't understand the people digging in as zero LLM absolutists.
I use LLMs yet I don't care to read about them or their usage at all. I can certainly see the reason why a place called "/r/programming" wouldn't want to have discussion about agent usage either, since it's not programming, it's a different activity.
Yeah I totally get the rule. I use LLMs when developing. In fact, I've been out of Claude tokens for the week since Wednesday, but I use Claude specifically for the boring, simple stuff I don't really want to do, but that Claude can. I'm simply not interested in discussing anything LLMs are able to do, it's not interesting.
It makes sense that a programming subreddit first and foremost discusses programming (the skill). We can go complain about Claude somewhere else if we want to.
Following up, anecdotally, people I talk to who are excited about LLM development usually either care more about product development, or don't have programming skill enough to see how bad the software is. Nothing wrong with either, but it can get tiresome.
> people I talk to who are excited about LLM development usually either care more about product development
This is an interesting thing I've also noticed in public hobbyist forums/discussion spaces where someone who is more interested in making a "product" clashes with people who are just there to talk about the activity itself. It's unfortunate that it happens but it will self-correct over time (like /r/programming here) and the LLM enthusiasts of Reddit will find another place to discuss ways of using them.
I have no reason to believe AI is used as much or as widely as you claim.
In my industry AI is fully available and was almost forced on us, and yet nobody is using it. The process of using AI and then scrutinizing the output is just more work than doing it manually. The most I encounter AI is when running job interviews and watching candidates read AI generated answers off a screen.
My industry also tends to skew much older than where I came from previously writing JavaScript full time. We are also fully remote with lots of status meetings. If I were less confident in my ability to communicate in writing maybe I would be more inclined to use AI.
I also don’t really envision AI accomplishing my multitude of daily managerial and administrative assignments that I have in addition to agile stories and writing code. Comparatively, writing code is the trivial part.
It may not be an in-denial, heads-in-the-sand situation.
Sometimes a topic gets so popular that it drowns out all the other topics. At that point, isn't the community just a glorified version of r/llm?
I'll give you one personal example:
The year Caitlin Clark was drafted to the wnba.
r/wnba went from a subreddit of 9000, to eventually 200k subs.
We were bombarded with CC posts every hour.
- Some of it was trolls staging a race war (this was during US elections).
- Some of it was genuine CC fans, who wanted to talk about CC.
- Some of it was bball nerds, who you know... wanted to talk about a bball player in a bball forum (regardless of who that bball player happens to be).
So what happened was, at any given day, 80% of the front page was CC content.
At that point, we might as well have been r/caitlinclark.
So the mods did something drastic and controversial. They banned all "low effort" CC content.
WTF does "low effort" mean? It pretty much meant 99% of CC posts got removed.
The forum went back to something that resembled a bball forum. That talked about other players. And other teams. Not just Caitlin Clark.
If you can possibly believe it, there are millions of programmers who have never and will never touch <your favorite tool>
Python, cmake, bash, perl, every conceivable tool or language, there's millions of people in the industry who will never touch them.
This might be a wild concept, so make sure you're sitting down: the field of software engineering is unfathomably larger than your personal, extremely narrow, viewpoint.
I have yet to run into any serious project in the wild that is using LLMs for development. I have seen vibecoded intern prototypes that took half a day to vet and dismiss because they were completely useless.
I'm sure your experience is different, but you can't _seriously_ claim we're "past the point" of not using LLMs for programming.
Vibecoding is a fundamentally different kind of activity than actual programming. It's a pure delusional dopamine rush, compared to the deliberate engineering required to build quality software.
What do you consider "serious", as that seems to be the main differentiator here. I know plenty of serious (multiple years of development and users, and began prior to LLMs) projects that have devs using LLMs for development.
I see. I would not consider these to be serious projects.
I'm sure they give people joy, and contain some decent problem solving, but they are both small extensions to existing software, and are not really solving interesting new problems or creating their own systems.
The size of the project is not what matters. It's the systems it includes and how complex they are.
Actually, that raises a question: are the more complex systems in this collection of projects vibecoded or programmed by a human? Which parts are you aware of that include LLM output?
How can you still not distinguish between using LLMs as tools and a non-technical person vibe coding? I have yet to meet any serious software engineer who had to dive into a legacy codebase or an unknown tech stack and found no value in e.g. Claude Code for general understanding and refactoring. Never mind coding: just the capacity to generate custom, contextualised documentation and examples tailored to your constraints and skills on the fly is ridiculously helpful.
For CRUD apps though, the intern closing the ticket literally 30 minutes after it's created is really hard to battle against. Especially when those tickets were created by suits.
I generally agree that while I think vibe-coding is here to stay, it's different from designing useful products and systems, and I don't know how to convince colleagues that we should uhh be careful about all this code we're pushing. I fear all they see is the guy aging out.
Ok well I have plenty of serious, production-level professional experience that says otherwise. Not “vibe coding” - we certainly review the code. It’s a tool that has downsides and failure modes, of course, but it’s at the point where it’s definitely speeding us up and we are using it a lot. Trust me, I’d prefer a world, on balance, where this wasn’t true – I don’t like many of the aspects and uses of the technology – but its utility in programming is undeniable now and the capitalists aren’t taking “no” for an answer.
TypeScript and Go on a 1.5 million SLOC production codebase; a complex SaaS tool for financial planning and analysis. Quite far from being “just CRUD”. Before Anthropic's Opus 4.5 I was trying out Claude Code and wasn't all that impressed, but since then it's definitely helped. The project I wrapped up before Christmas would have gone into the new year without it. You've still got to keep a close eye on it; whenever I've got lazy with review, trusting it too much, I've always regretted it. It's never one-shotted anything, even with plan mode and all that. I'm a natural skeptic on this stuff and was actually very skeptical for most of last year. But I'm very confident there's a large net productivity gain now.
I don't know where you got the "just CRUD" quote from. I never mentioned CRUD. But this sure sounds like it would be CRUD with some additional models in the backend.
What makes it not just CRUD? Is it using some complex model for forecasting?
I wasn't quoting you, they are scare quotes. It is a web app with a TypeScript and Go backend, but to call it a CRUD app would be misleading (despite the fact that yes once you boil it all down, everything is a CRUD operation) because it's a complex and flexible web app. It's the kind of thing that a bunch of people would prefer to be a native app; it's spreadsheet-like.
> I have seen vibecoded intern prototypes that took half a day to vet and dismiss because they were completely useless.
They weren't useless; they proved whether the direction the prototype was exploring was worthwhile. I've personally made many completely shit code prototypes in the years before we had LLMs. Of course they weren't magically production ready, but that's not the point of a prototype.
> I have yet to run into any serious project in the wild that is using LLMs for development.
How about Claude Code? 100% of it was vibe-coded according to its creator.[1] Google and Microsoft also claim a lot of their internal code is AI-generated now. [2] [3]
Naturally, none of the big tech companies will just release a pure vibe-coded project due to structural reasons, but you also _seriously_ can't claim that serious projects don't use LLMs as well these days. Maybe in your limited experience, it isn't true, but that doesn't generalize to what's actually happening.
> How about Claude Code? 100% of it was vibe-coded according to its creator.
Agents are trivial to make. I don't know whether that means they're not "serious", but it's exactly the type of thing you can make yourself in a very short time, and exactly the type of thing even LLMs can't fuck up too bad.
With regards to the overall point, I think the existence of projects using LLMs to do development doesn't really lend credence to the idea that they're somehow preferable or desirable. People tend to use hyped things, imagine they're useful even when presented with evidence to the contrary, and generally be very resistant to sobering realities.
It took years before people stopped running hadoop clusters to do things that a single linux box could get done 10x faster with some basic pipes. I'm sure there are still people who have "serverless backends" that work terribly in every regard in comparison to literally just a linux VM somewhere. People in software development tend to find these types of things every once in a while and adopt them wholesale.
This cycle is helped by the fact that the field has been growing constantly and a lot of the adoption comes from kids who don't know any better. Every piece of shit technology that comes and goes has meat for the grinder coming straight out of university.
Would I put LLMs in the same category as these previous (nearly) useless things? Probably not... But you should never trust peoples' perception of usefulness when it comes to almost anything in software development.
I don't know if Claude Code is actually a great example. If you have used it for longer periods of time, you will have noticed how insanely buggy it is. And for every bug that they finally fix, there seems to be a new one introduced.
I don't even mind vibecoding. I have vibecoded a couple of tools that my coworkers and I use every day to make our lives easier, but I'm not going to pretend they are anywhere close to something that I would release to the public.
It’s juvenile to consider all LLM assisted coding as vibecoding. I’m not going to expand here because this topic is about as much fun to discuss as politics, but coding assistant tools are just tools.
If you give a regular person a race car, they will crash it about as fast as their vibecoded app crashes. Give the same race car to a pro and it's a different story.
I still think this was the right decision by the programming mods there. Talking about tools is pretty boring, and you need to train to use something like an LLM assistant. No one who can’t program a language should be using an LLM to learn it unless they know about 2-3 other languages already, IMO.
Nah I think it really is more nuanced than that. It is true that a non-technical person's vibe-coded side-hustle is completely different than how a professional developer may ship genAI code, but we're willfully glossing over the real problem that professionals are pushing out TONS of genAI code that's closer to vibes than it is to the pre-AI expectations on pushing to prod.
> The fact that the people running r/programming don't know not to wait until April 2 to publish this tells me that they don't have real-world experience in shipping software in a business environment.
I can't tell if this is a joke or not. Do people really care? Or maybe it's a U.S. thing? At least in my country nobody cares....
I hate AI video, I hate AI art, but if you are pretending that AI isn’t going to be writing code for 99% of projects going forward you are absolutely kidding yourself.
AI video and art is going to be increasingly used in advertising, news/reporting, games, etc. Therefore, you aren't allowed to hate it or even complain about it. Right?
> We are SO past the point of software being developed without LLMs at _all_
That's exactly why I've given up on programming, development or career subreddits. There are a lot of interesting software engineering challenges opening up, but instead of discussing it like professionals it all gets drowned in a big negative mixture of rants against the financial AI bubble, companies using AI as an excuse to lay off, and a general antiwork vibe. All these subreddits have become feel good/bad echo chambers for angry teens and students with no real world professional experience.
So you really don't understand why people with real professional experience might be anxious now, and why there is an antiwork vibe? It's not just junior devs.
I understand why they might be anxious, but my point is it’s unrelated to the technology itself. Imagine people denying the internet works in 2000 because of the Dotcom bubble. Same with layoffs, they are not really due to AI (https://en.wikipedia.org/wiki/2026_United_States_corporate_m...), it’s a political discussion. And the antiwork vibe is not new, I have strong political convictions on how we should more equally redistribute capital gains so that even if AI was able to replace software engineers it would not be an issue but again that is political.
LLM tooling brings a lot to senior devs. I have 15 YOE, I own a small agency, and we are shipping faster, with fewer bugs, and believe it or not we are hiring, because we are able to take on more work and grow, as is logical absent the political issues plaguing the US in particular. The market is already adjusting, which is why to me we are way past the point of developing professionally without LLMs.
So no, I don’t get why the political topics can’t be discussed elsewhere, nor the irrational denial of the technology because of said political issues.
I know LLM-generated code comes with its own challenges, but the absolutists are definitely clinging to a time that has passed.
I saw a recent discussion on Immich where a maintainer flat-out denied a PR, saying "That diff looks LLM-generated to me; is that indeed the case? If so, we'd prefer not to receive a PR for it"
The PR was from a professional software engineer, who worked weeks of his free time on a big feature. Well structured + tested. Dismissed just because AI was used.
https://github.com/immich-app/immich/discussions/23745#discu...
"Well structured + tested". Who would know? The diff is almost 200k changed lines. Good on them for saying no to this nonsense.
There's a good chance the actual needed implementation is less than 20k lines (I've found that LLM bloat grows exponentially), but even that's a stretch to review and accept wholesale.
I'm the person working on that fork. Yes, it has now diverged 200k+ lines, but half of that is specs, research, and documentation, and it includes a month's worth of work.
The comment in question was a small feature of about 1.5k lines changed and it was solidly tested.
Eh, fair enough. 1.5k is reasonable. Have you tried just writing it yourself instead? It's likely it'll be less than 1k lines and you should have no problems writing an implementation yourself if you understand the structure of the LLM version.
Why would I write it myself? I use Claude Code 12 hours a day and I'm really confident with what I'm able to build with it. I use it at work with incredible results. Spec-driven development with harnessing is super powerful; I'll never be writing large features by hand again.
Heh, fair enough. To me this comes off as "I'm unable to write it myself [possibly because I've outsourced my thinking too much]", to be honest, but I'm not going to argue; you're the one who presumably wants this code to end up in that repository.
I wouldn't really consider (what is likely) sub-1kloc a "large feature", but to each their own.
I don't want it to end up in that repo anymore, hence the fork. I've got a growing community of people who have been eagerly awaiting this feature and a ton more that I built.
I definitely could write this by hand - the stuff I built in the last 10 years before LLMs was more complex than this - but there's no way I'm spending all my free time slowly crafting something if I can just use AI and get the same results much faster.
Definitely not the same. Luddites were fighting for humane working conditions; breaking machines was just a means to an end. They weren’t doing it because machines were the problem.
Anti AI crowd on the other hand just doesn’t like AI. A modern equivalent of a Luddite would be someone going on strike to protest firings.
You are being overly dismissive of a mindset you obviously don't understand. Of course being anti-AI is about decent living conditions for humans. Most of us don't believe in singularity or Matrix-style threats.
But current AI is actively destroying our breathable/livable planet by drawing unmatched quantities of resources (see also DRAM shortage, etc), all the while exploiting millions of non-union workers across the world (for classification/transcription/review), and all this for two goals:
1) try to replace human labor: problem is we know any extracted value (if at all) will benefit the bourgeoisie and will never be redistributed to the masses, because that's exactly what happened with the previous industrial revolutions (Asimov-style socialism is not exactly around the corner)
2) try to surveil everyone with cameras and microphones everywhere, and build armed (semi-)autonomous robots to guard our bourgeois masters and their data centers
There is nothing in this entire project that can be interpreted to benefit the workers. People opposing AI are just lucid about who that's benefiting, and in that sense the luddite comparison is very appropriate.
You have misinterpreted my comment. But I concede that I should have written it more clearly.
I divide anti-AI people into two groups. Those who don’t like AI because of what it is, and those who don’t like it because of its impact on society. Naturally there is an overlap.
Luddites were not opposed to the technology. So the comparison to them is only correct for the latter group.
Not talking about LLMs on a forum is not going to change anything in the grand scheme of things. It could be a protest, but I see it more (the feeling I get from the announcement) as a means to protect the forum from being overrun regardless whether AI is ultimately good or bad.
Also note that nowhere in my comment I have stated my position in this argument.
I'm not really convinced there's people who don't like AI "because of what it is". I mean, because of what it is, beyond any social/political considerations.
The only case I know of is when there was an open letter from Sam Altman and other AI investors calling out the existential danger of AI, which in my view was a way to divert the debate from political questions to hypothetical Matrix/Terminator questions about consciousness and singularity.
really? is it so hard to believe that people dislike AI because it is unreliable, can't be trusted, changes how we work with code, takes the fun out of coding?
i am not worried about social consequences. society can adapt.
i am also not worried about energy use. we have endless clean energy if we can figure out how to use it.
yes, i am worried about society choosing the wrong adaptation. that is, i believe we should train everyone to be teachers, doctors, scientists, and artists. the stuff that AI should not be doing. but i am not worried about using AI for automation, putting people out of jobs. if we give them the opportunity to learn new jobs and,
IF, AND ONLY IF, we get AI to do its work with 100% reliability and accuracy.
only then AI will be useful. i have tons of software projects that i'd like to get done. but i can't trust AI to do them for me, because i would spend even more time to verify the results than i would to code it myself.
so yeah, i absolutely don't like AI for what it is, a tool with limited uses that requires me to work in a way i don't want, if i want to benefit from it.
Oh, thank you for clarifying! That is entirely believable, and i'm also one of these people then. I just didn't understand what you meant. I thought you meant people hated AI for being creepy alien tech from scifi movies, not for being unreliable, untrustworthy, etc...
Favourite genres of posts on HN in the past 2 years:
* “I am bullish about AI”
* “I am an AI skeptic, [long rambling], but overall, I am bullish about AI”
It’s amazing how even criticism of the technology somehow ends up being a hype post. At least there are still places on the Internet where we can have a serious discussion about the downsides.
As someone who recently wrote the latter post (https://news.ycombinator.com/item?id=47183527), the more nuanced approach that "AI has good and bad things" is more reflective of the real world than an absolute "AI is good" or "AI is bad", and at the least it's more conducive to civil discussion.
I prefer strong opinions to the academic conclusion that a thing has some good parts and some bad parts. I feel 99% of modern essays are afraid to take a stance about anything, and it makes for uninteresting reading and even less interesting discussion.
To be fair, my issue with the born-again AI skeptic genre of posts is that it's basically clickbait. As if being a skeptic at one point makes your argument stronger, proving that the hype is real, and one should pay attention. It's intellectually dishonest, even if meant in earnest.
(Your post history shows that you have been anything but an AI skeptic. Case in point about intellectual dishonesty.)
"Asbestos has good and bad things"
"Assault rifles in the hands of ordinary citizens have good and bad things"
"Everyday chemicals in the food supply have good and bad things"
Look, some issues require nuance. Others don't. It's gaslighting to tell activists who consider Big AI to be a net negative for society (by an order of magnitude!) that their position isn't "real-world reflective".
See dang’s comments on https://news.ycombinator.com/item?id=47340079 . (That link itself is a submission about HN’s recent guidelines changes to include “Don't post generated comments or AI-edited comments. HN is for conversation between humans.”)
Maybe this was a genius move made precisely to be ambiguous on whether it was April Fools or not... so that the author can later read the room and clarify whether it was or was not April Fools, without much repercussion either way.
This is to be expected. There's a definite split in the engineering community between those who are embracing AI, and those who are rejecting it. It's now become political, like systemd and wayland.
Even people who are actively embracing it don't want to have 95% of all submissions in most dev-related subs be LLM-adjacent. There are separate subreddits for that, just like there are subreddits about macOS and Linux specifically, despite a huge number of devs using those OSes.
Also, most discussions about AI / LLMs on career or general programming subreddits are not what I would call productive. I _want_ new useful information about this topic, but I know I won't get it as things are right now.
I had almost forgotten about that subreddit. Sadly it has been in a zombie state for years now. Despite having millions of members, you can hardly find even 100+ comments on any post on the front page.
Last time I checked, only political posts (like those related to offshore programmers) got any kind of attention. Most technical posts barely get 10 comments. Some of the smaller subreddits (like /r/ProgrammingLanguages) are much better.
It's badly moderated (not enough mod resources or something). There is essentially only one mod. Bad comments have a 99.9% chance of not being moderated out at all, and that killed my interest in participation
If you enjoy comedy, you should check the status of subreddits like /r/selfhosted or /r/homelab, etc. I find them interesting because they sit on the edge between computer power users and software developers. They used to be nice communities.
Now it’s people sharing AI apps that look exactly like other AI apps that they have never heard of [1]
Projects rise then implode hilariously in a month [2]
An ebook management project that grew over a year with a pretty conservative feature set, then in 3 months implements every ebook feature under the sun, breaks everything, then implodes. Funniest thing is when the “AI Slop” callout is itself AI-written and nobody notices. [3]
Like… amazing comedy. Then after the owner deletes the repo, 10 people have to role-play the hero who “has the code” because clicking Fork on GitHub is the sign of a true hacker.
Wow that's lovely. Wish we could do that on HN for a bit.
(Yes, I know, I can install an extension or something to hide LLM/AI submissions. I don't want to, and that's not the same thing, and won't have the same effect.)
I use LLMs, I think they are useful, but oh my sweet jesus I am so tired of reading and hearing about them everywhere.
We also believe that, generally, the community have been indicating that, by and large, they aren't interested in this content.
How can that be true? Reddit is vote-based. So if people weren't interested, they wouldn't vote it up and it wouldn't appear on the front page. Hacker News has no rule banning posts about Barbie and yet, amazingly, Barbie rarely makes it to the front page, because that's how upvotes work.
People clearly are interested enough to vote LLM related posts up, but a bunch of mods who don't like AI are upset enough to want to dictate what others can find interesting. Which is not unusual for Reddit.
Unlike Hacker News, Reddit's new Best algorithm often surfaces newly posted posts (which is a good idea that helps mitigate the cold-start problem), but that means people who are subscribed to /r/programming will see posts about LLMs and typically downvote them.
From the user responses to the linked ban, said ban was a positive decision for that community.
The takes on LLM programming on reddit are hilarious and borderline sad. It's way past the point of denial, now into delusions.
They truly believe LLMs are close to useless and won't improve. They believe it's all just a bubble that will pop and people will go back to coding character by character.
I have been discovering/enjoying the 'smol' web, unironically.
Hmm, there's a site I wanted to share with you but I can't find it atm., it's a directory of personal websites sorted by topic. It pops in here from time to time.
But hasn't it gone down in quality with broader mainstream appeal, more AI slop, and just general self-promotion? I feel like a lot of niche communities have also lost their core or original user bases, which are not as active any more - or it could just be me? For example, off the top of my head without digging too deep, r/juststart used to be very high signal and strongly moderated, but now not so much. But, on the other hand, I did discover r/laundry recently with some awesome content around “spa day”, but again that's mainly one user responsible. I guess another big gripe is having to use the Reddit mobile app after they closed their APIs and shut down third-party apps, because now I can't browse; it's more feed-like. Sorry for the ramble, not sure what my point is, but hoping others can share their experiences and any advice too I guess
I'm not sure how you can claim that going more mainstream would decrease the quality of a site that gave us "we did it reddit!" and "the bacon narwhals at midnight".
You think this place, the people in my circles infamously refer to as the "orange site", is considered a bastion of good conversation among the people that don't frequent it?
Makes sense. If I'm looking to read discussions about stable selection, feed prices, etc, why would discussions of spark plugs be relevant?
> /r/assembly bans all discussion of 4GL
Also makes sense; people wanting to discuss register allocation, bit twiddling, etc probably aren't interested in insurance claims taxonomies or similar.
> LLM programming isn't going away by not talking about it.
Right, but is the context still /r/programming? After all, there are tons of subreddits you can go to to discuss LLM programming. Why do you need to shove it into a space created for human thoughts on programming?
> It's time to move on, and eventually considering farming.
Okay, understood, but my question still stands - why conflate programming with vibe-coding?
Reddit is doomed anyway. People are using AI to start threads, and other people are using AI to comment on these threads. You can never know what you're interacting with.
Worse, I am repeatedly being accused nowadays of being an LLM. It probably doesn’t help that I riff-write with only a rough outline of what I want to say, not how to say it.
If the accusation is that I am an inference engine pumping out words based on a trailing context window then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of y/C/N2 -> Si+e- (sunlight, coal, and wind turned into silicon electrons.)
> If the accusation is that I am an inference engine pumping out words based on a trailing context window then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of y/C/N2 -> Si+e- (sunlight, coal, and wind turned into silicon electrons.)
This sort of tells me that you are pro-LLM, and most pro-LLM people just paste the contents of their ChatGPT output and try to pass it off as their own.
Given that you say you aren't, the most likely explanation might be that you are spending a lot of time reading LLM prose, and are starting to write like it now too.
I ran a pretty interesting AI-detection demo over historic HN data, and would love to see if HN could make use of this tech. But I have no clue how to reach out or who might be a contact here.
Make it into a browser extension. Or, honestly, just a page with outputs and a tip jar. If there are interesting findings, highlight them; maybe blog about it and post that.
I gave up on r/programming after an article I wrote (thoughtfully, without AI, even though the content might not have been super interesting) got mod-slapped with a stickied comment "This content is low quality, stolen, blogspam, or clearly AI generated".
Ironically, that comment was added three months after I posted the article, when it was nowhere near the front page anymore, in a clearly automated and AI-driven review.
Still salty about it.
Reddit is a low-quality platform, the sorts of people who would be interested in moderating a popular subreddit like r/programming are even less fit to be moderators than the average moderator is. It would be better if people completely stopped using the platform.
It's useful for syndicating your self-hosted blog content so that when people Google topics your stuff shows up.
I generally fire-and-forget on Reddit, except for more niche communities where sometimes domain experts actually comment
Do you think maybe it's the disclosure about self promotion you added? You explicitly say the purpose of the blog post is to promote your consultancy, so that might be why they marked it blogspam. I know it feels like you're being forthright, but really that you're promoting yourself is implicit in the fact it's a personal blog, so you can leave that out and still be honest.
Yeah, maybe it's that, though I still wouldn't expect someone to categorize the post as blogspam, even if they just glance at it. (At least according to my definition of blogspam, but I guess each has their own.) But yes, pragmatically I should probably remove the disclaimer.
Moderators on a power trip? Fuck 'em.
I gave up on Reddit after many years of acting as characters on the Venture Bros subreddit. Every so often I would retire my account and begin a new one. I've had MANY over the years. I've used Reddit "cleanup" apps to remove/clean the content I've created. Good stuff over time, very niche and specific to VB.
I gave it all up when Reddit started recycling my old accounts and reposting my content as if it were new -- but not authored by me, just regurgitated back onto the site.
If that happened to me, you can bet it's happening en masse. Which indicates to me that the site is really dead.
Good decision.
AI programming is fundamentally different from programming, and as such the discussions merit separate forums.
If r/programming wants to be the one solely focusing on programming then power to them. Discussing both in combination also makes sense, but the value of reddit is having a subreddit for anything and “just programming” should be on the list.
> AI programming is fundamentally different from programming
It's really not. Maybe vibecoding, in its original definition (not looking at generated code) is fundamentally different. But most people are not vibe coding outside of pet projects, at least yet.
Hopefully this does not devolve into ‘nuh-uh’-‘it is too’ but I disagree.
Even putting aside the AI engineering part where you use a model as a brick in your program.
Classic programming is based on assumption that there is a formal strict input language. When programming I think in that language, I hold the data structures and connections in my head. When debugging I have intuition on what is going on because I know how the code works.
When working on somebody else’s code base I bisect, I try to find the abstractions.
When coding with AI this does not happen. I can check the code it outputs but the speed and quantity does not permit the same level of understanding unless I eschew all benefits of using AI.
When coding with AI I think about the context, the spec, the general shape of the code. When the code doesn’t build or crashes the first reflex is not to look at the code. It’s prompting AI to figure it out.
This is the same argument that people used to have against compilers
It is not. One version of a compiler on one platform transforms a specific input into an exact and predictable artefact.
A compiler will tell you what is wrong. On top of that the intent is 100% preserved even when it is wrong.
An LLM will transform an arbitrarily vague input into an output. Adding more specification may or may not change the output.
There is a fundamental difference between asking for “make me a server in go that answers with the current time on port 80” and actually writing out the code where you _have to_ make all decisions such as “wait in what format” beforehand. (And using the defaults is also making a decision - because there are defaults)
Compilers have undefined behaviour, but UB exists in well-defined places.
Even a 100% perfect LLM that never makes mistakes has, by definition, UB everywhere the spec is silent.
Right, they allow for the idea of gradual specification - you can write in broad strokes where you don't care about the details, and in fine detail when you do. Whether the LLM followed the spec or not is mostly down to having the right tooling.
Compilers are an abstraction. AI coding is not an abstraction by any reasonable definition.
You're only thinking that because we're mostly still at the imperative, REPL stage.
We're telling them what to do in a loop. Instead we should be declaring what we want to be true.
The value is in the imperative: the computer does what you tell it to do. That control is very powerful, and is arguably a major reason computer technology is as powerful and popular as it is today. Bits don't generally argue with you the way analog programming, whether by electronic or mechanical means, did before the transistor.
You can certainly write in an imperative or functional style, but you are still telling the computer what you want. LLMs use imprecise language and can generate output only loosely bound to what people actually intended. They have their use cases too, but they have a radically different locus of control. Compilers don't ask you to give up precision either; they will do what you tell them to do. AI can do whatever it thinks is the most likely next token, which is foundationally different from what we do when we engage in programming, or writing in general.
You’re describing a hypothetical that doesn’t exist. Even if we assume it will exist someday we can’t reasonably compare it to what exists today.
It exists today, please message me if you’d like to try it
It very much is. It’s more like telling an intern what to do and then reviewing their code. Anyone can do it, and it results in (mostly) slop.
>But most people are not vibe coding outside of pet projects, at least yet.
Major corporations have had outages thanks to AI slop code. Lol the idea that people aren't vibe coding outside of pet projects is hilarious.
The idea that everyone using LLMs is vibe coding is equally hilarious.
If you use an LLM to generate source code you are vibecoding.
You specify the problem in natural language (the vibes) and the LLM spits out source (the code).
Whether you review it or not, that is vibecoding. You did not go through the rigor of translating the requirements to a programming language, you had a nondeterministic black box generate something in the rough general vicinity of the prompt.
Are people seriously trying to redefine what vibecoding is?
> If you use an LLM to generate source code you are vibecoding
No, you're not.
> Are people seriously trying to redefine what vibecoding is?
Yes, you are.
No, that is literally vibecoding. Reviewing vibecoded source is just an extra step. It's like saying "I'm not power-tool gardening, I use a pair of gardening scissors afterwards." You still did power-tool gardening.
As additional proof, the dictionary definition of vibe coding is "the use of artificial intelligence prompted by natural language to assist with the writing of computer code" [1]
It seems like vibecoders don't like the label and are retconning the term.
[1] https://www.collinsdictionary.com/dictionary/english/vibe-co...
Both you and the Collins dictionary (merely one dictionary, not an absolute authority) are retconning. “Vibe coding”, as originally coined in this tweet, means something more specific: to generate code with LLMs and not really look at the output. The term itself suggests this too: reviewing code is not exactly a vibes-based activity, is it?
https://xcancel.com/karpathy/status/1886192184808149383
Here's Merriam Webster with the same definition: https://www.merriam-webster.com/dictionary/vibe%20coding
That tweet coins the term, we agree there. The activity it describes is using natural language to generate software. Whether you add a review process or not doesn't substantially change that. Sure, Karpathy says he doesn't "read the diffs anymore". Why does he say "anymore"? Clearly he was reading them at some point. If not reading any diffs was a core part of the activity, that wouldn't be the case, the tweet itself clearly outlines that as optional. He's clearly not talking about a core part of the activity.
I think the tweet is pretty clear on its intention for the definition and I’m not interested in arguing about it.
I do think the dictionary definitions, such as they are, are coming from a real place: some people do use the more general definition. And you seem to already know about both definitions. So why argue so belligerently and definitively in the first place? Parent comments you were replying to were obviously using the original definition. Talking about “retconning” is obviously silly given this timeline. Meaning in language is not a race to be the first to make it into a dictionary. It’s a very new phenomenon that new terms make it so quickly into a dictionary at all, and they’re always under review. So maybe factor that into your commentary?
Because I believe the broad definition is more widely used, I also don't think the narrow term is useful or meaningful, and I think it's being used purely by vibe coding practitioners who feel that the term has negative connotations.
This all started with the parent comment telling someone else (belligerently and definitively) using the broader definition that they were wrong.
The narrow term is very useful, there is obviously a world of difference between reviewing the output of an LLM and not - the latter is irresponsible. It shouldn’t be surprising that people bristle when being accused of it. It doesn’t make sense to accuse someone of redefining a term to make themselves feel better when the history of the term shows that yours is the redefinition. The simpler explanation is that the accused just doesn’t like being called irresponsible - not that they’re trying to defend LLM code generation from someone who doesn’t like it.
You're saying what I'm saying. They feel self conscious about the term "vibe coding".
And to be clear, nobody accused the people who lashed out here. They reacted to general statements that people are vibe coding.
I also don't understand why the term vibe coding couldn't contain a spectrum of responsible use. Just say you're reviewing your vibe coded commits!
Clearly the issue here is about how vibe coders perceive the term vibe coding. Some of them feel that it's demeaning and are trying to wiggle their way out of the label by arguing semantics.
No, people think it’s demeaning because they are using a different definition to you, the definition which was the original one. Don’t know how I can put it clearer.
Sure, but if r/programming can't include the combination/hybrid then the whole subreddit is likely doomed to obsolescence.
That genie's not going back into the lamp.
(Heck, I've leaned on LLMs to generate damned SwiftUI code for me.)
There are groups for carpenters who only use hand tools. Obsolete and existing.
And, arguably, still useful to all.
True, but carpenters using hand tools are a niche.
If you are implying that programmers who hand code are going the way of carpenters using hand tools, I think I can agree.
I do... but I also think all programmers need to know how to hand code, and all carpenters need to know how to use hand tools.
I agree also.
I mean, no, why would it be? There is so, so much to talk about in programming other than AI. Meanwhile, the current HN front page feels like 90% LLM spam: the complete antithesis of what I used to come here for.
I personally can’t wait for no-ai communities to proliferate.
Taking your estimate as a superlative, it would be asinine for the community here to censor AI-targeted discussions in the way I think you'd like to. The same goes for a programming community that censors discussions about LLM programming.
You are basically asking for a brain drain in a field that—like it or not—is going to be crucial in the future in spite of its obvious warts and poor implementation in the present. If that's what you want, be my guest and encourage it; but who's authorized to unilaterally make that decision in a given forum?
In the present case, the moderators for r/programming are. But they're making a mistake by marginalizing the technology that's redefining the practice just because people talk about it too much, instead of thinking about how to talk about it effectively and then steering the community in that direction.
But that's a full-time job. Which is why I think HN may turn out alright in the long run or a similar community will replace it if it fails to temper the change in the industry.
What this decision signals to me is that r/programming has been inert for some time. I'm sure plenty of programmers, irrespective of their position on AI, will find the community rejoicing in its resignation to the technology's influence as their cue to finally exit.
[dead]
Sheer nonsense. Handcoding is thriving and will easily survive long into the future, especially after the bubble bursts (which is already happening).
Amen. Just like hand washing clothes survived the washing machine fad.
Fr. Hand washing helps clothes last longer, and it circulates fewer microplastics.
Not to mention it's more resource efficient.
They're not banning "AI programming": just specifically large language models.
There can't be any interesting discussion about AI programming. Every conversation boils down to what skill files you use, or how Opus 4.6 compares to Codex, or how well you can manage 16 parallel agents.
There genuinely is a lot of interesting discussion to be had about LLMs, and I know this is true because I discuss things with my coworkers daily and learn a lot. I do admit that conversation online about LLMs is frequently lacking. I think it's a bit like politics - everyone has an opinion about it, so unfortunately online discourse devolves to the lowest common denominator. Hey guys, have you noticed that if you use LLMs frequently it's possible you'll forget to think critically?
But "there can't be any interesting discussion about AI programming" is completely false.
For me, almost every single time a conversation like this happens in real life it boils down to the one side claiming that "This is the future" and "Don't get left behind" followed by a torrent of hype and buzzwords. So no, there is no interesting conversations to be had about LLM programming anymore.
> "This is the future"
Yeah, that's silly, it is already the present!
Some interesting conversation one can have with coworkers specifically:
1. How should code review and responsibility for code be updated to a) increase velocity, b) keep quality and c) keep reviewers from burnout. There are plenty scenarios in which vibe coding a component in an afternoon is the correct choice, even if it is buggy, insecure, and no one really understands it.
2. Which parts of the codebase work well with code assistants, which don't? Why? What could be changed to make it easier? In my experience, Claude Code sometimes loses its mind on infra topics. It is also not very good at complex, interconnected services (humans aren't either).
3. Which tasks could be offloaded to agents to save everyone time and sanity? Creating Jira tickets from meeting transcripts is an obvious one; collecting and curating bug reports is another.
4. How should we design systems to better work for coding agents? Does it influence our tech choices? Should it influence them?
5. Is AI a net positive or negative for security?
And so much more. The last topic in particular is incredibly important, and things are developing so fast that you can probably have a new conversation on it every two weeks.
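To make topic 3 concrete, here's a minimal sketch of a transcript-to-tickets helper. Everything here is hypothetical: the function names are made up, and the model call is stubbed out so the example runs offline.

```python
# Hypothetical sketch: turning a meeting transcript into ticket drafts.
# `llm` is any callable that maps a prompt string to a response string;
# here we inject a trivial stub instead of a real API client.

def draft_tickets(transcript: str, llm) -> list[str]:
    """Ask the model for one ticket title per action item, one per line."""
    prompt = (
        "Extract action items from this transcript as one ticket title "
        "per line:\n" + transcript
    )
    response = llm(prompt)
    # Keep non-empty lines; a real harness would also validate and dedupe.
    return [line.strip() for line in response.splitlines() if line.strip()]

def stub_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g. an Anthropic or OpenAI client).
    return "Fix login timeout\nAdd retry to payment webhook"

tickets = draft_tickets("...meeting transcript text...", stub_llm)
print(tickets)  # ['Fix login timeout', 'Add retry to payment webhook']
```

The interesting conversations are exactly about what replaces the stub and the validation comment: which model, what prompt, and who reviews the drafts before they hit the backlog.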
Maybe you struggle to have good conversations because I just provided an anecdote and you immediately stated that my anecdote is false? If this is how you typically interact with people I’m not surprised you’re not having interesting conversations.
He didn't state your anecdote is false, the first two words of his comment are "for me". That means in his experience, not yours.
Ironically, in your crusade to wave the "I'm being censored!" flag, the only person who is trying to do any censoring... is you!
And to top it off, as if that wasn't enough, you're also incredibly snarky and basically implying that this person who just vaguely disagrees with you must be unlikable or something. Which, in another twist of irony, actually makes YOU appear unlikable, because what well-adjusted adult would feel the need to throw someone under the bus for slightly disagreeing with them?
The problem is that mrcsharp added the last sentence: "So no, there is no interesting conversations to be had about LLM programming anymore." So they are definitely trying to turn their anecdote into a universal.
Most of your comments about johnfn are still apropos, though...
IMO since it started as clearly an opinion/anecdote, that last part should be taken to mean in that context, not universally.
He said:
> "there is no interesting conversations to be had about LLM programming anymore"
The first sentence was in his experience, but this is a universal assertion. He is claiming no one, in the world, is having interesting conversations about LLMs.
Is my response really so off-base? Imagine you say "I like Rust because it made my app go fast" and someone replies "There is no one who has used Rust to improve performance." Do you really think that's a normal way to respond to someone sharing an anecdote?
But that's not how he responded because there's a whole ass comment before that.
Okay sure, if you read the comment and then use a Men in Black mind wiper thingy before reading the last sentence, then it might seem brazen or universal. But that's not what you did.
The only way that last comment can reasonably be taken to mean "for everyone on Earth" is if you did not read the lines before it. Because, in that context, to me, it's clear he is only talking about his experience.
This is a phenomenon I've noticed lately where everyone feels the need to add a disclaimer to everything, and not doing so is seen as an "aha, gotcha!" type thing. But we're not algorithms. You do not read one line at a time and then digest it.
You're human, he's human, and there's context. You know that it would be extremely unreasonable for someone to think that nobody, anywhere, has anything to say about LLMs right? Okay. That doesn't mean that this person is being unreasonable.
It means that that's probably not what he meant.
Read his comment in context. The comment thread, condensed, is:
mudkipdev says "There can't be any interesting discussion about AI programming". We agree that is a universal claim. (Right? I mean, your comment says "You know that it would be extremely unreasonable for someone to think that nobody, anywhere, has anything to say about LLMs right" but isn't this a clear example?)
I say "There can be". We agree that is anecdotal.
mrcsharp says "For me, [stuff that supports mudkipdev]. Therefore, there is no interesting discussion".
He is re-asserting mudkipdev's point. mudkipdev says A, I say !A, he says, actually, A.
Your interpretation has him read the back-and-forth between me and mudkipdev, and respond to "A", "!A" with "B". If you only read mrcsharp and nothing else in the thread I can understand this reading, but the context changes things.
Brilliantly mirrored! Unfortunately there are far more people like this than I would ever have imagined pre-AI.
My pet peeve with all LLM discourse is whenever someone mentions any problem they experience with LLMs or any mistake they make, someone comments that humans make the same mistake.
And the difference is that humans will learn not to make that mistake again.
That's very optimistic.
Thankfully humans have only been around for about 20 hours!
I disagree, and you could reduce basically anything to this: "there can't be any interesting discussion about React; every conversation boils down to which framework you use, how you manage state, or whether you use TypeScript or JavaScript."
All of those are opinions about programming. Which framework, which language, etc.
Conversations about which model to use aren’t conversations about programming.
A better analogy would be some topic that you can’t discuss without it boiling down to which text editor you should use. It’s related to programming, a little. But it’s not programming.
That is exactly why I left reddit. r/javascript had almost completely abandoned JavaScript discussions for React and Angular while r/programming was half filled with irrational JavaScript fear nonsense.
That isn't why /r/programming banned it. They banned it because every discussion about LLMs inevitably devolves into discussions about AI slop in varying levels of civility, and the rare good LLM submissions/discussions do not offset it.
Other tech-adjacent subreddits such as /r/rust have banned LLM discussion for similar, more pragmatic reasons.
so like the past? how emacs compares to vim? how java compares to javascript? how a true programmer can read binary files without a blink?
Genuine question: how to distinguish yourself from the stream of slop?
I am also annoyed by the endless stream of articles and projects related to LLM-assisted coding. Not because I dislike LLM-assisted coding as an idea, but because it's all more of the same (as you said). I think there is still a lot of low-hanging fruit in improving LLM harnesses that no one is working on, because everyone seems to be chasing the latest trends ("agentic", "multiagentic", "skills") without thinking bigger.
But I'm afraid that if I finally invest time and implement some of my ideas on making LLM-assisted coding better (reliable, safer, easier for humans to interpret and understand generated code), I won't be able to gather any feedback. People will simply dismiss it as "yet another slop for creating more slop" and that's it.
What is the way out of this conundrum?
This is far too negative and reductionist.
It's like saying there's no interesting discussion about programming, just whether OOP is overhyped, whether Python is slow, or how well you can convert a C codebase to Rust.
You have not seen my recent WhatsApp chats. Me and a pal are talking about what we're doing with Claude code, and it's quite interesting!
Just like discussions about traditional programming never were only about syntax and type systems, AI discussions aren't only about prompts and harnesses. I find there's quite a bit of overlap actually! "How do you approach this problem?" Is a question that is valid in both discussions, for example.
In my experience Opus 4.6 is the best.
> or how well you can manage 16 parallel agents.
Claude does that for me. :)
Seems a lot of commenters here dislike their decision; I like it, though. LLM-generated projects, articles, and blogs are low-effort products lacking authenticity.
And discussion of LLMs themselves can, in the long run, be fairly tiring; follow r/LocalLLaMA for a while and you'll see what I mean. But if you are really into LLMs, that sub is great.
It is simply not fun to go on to a subreddit and see 90% being projects and blogs that are obviously created using AI, with authentic content pushed to the side by the sheer volume of artificial work. r/Python was horrible at one point, but the mods have been stepping up their game.
> LLM-generated projects, articles, blogs are low-effort products lacking authenticity.
I think this is mostly true but not completely true. LLMs are tools, and right now we are learning how to use them, how to use them well, and, more importantly, how not to use them.
I created an account and started reading this site primarily for programming news when r/programming took a precipitous dive in quality around 2020 or so. Before it was an example of one of the few good communities there, but it quickly became show and tell (ironically this was against its unenforced rules). And any real interesting posts had no discussion. But then I noticed the "Other Communities" tab would show posts from a HN posts sub that tracked posts here, and suddenly I was able to get great information. A post about CockroachDB that had 20 boorish comments complaining about its name over there would have the designer of it over here answering technical questions about its capabilities.
THAT SAID, I think this might be what gets me to go back to that place. I used to come here to read about new Python tooling, latest database development news, interesting thinkpieces on development practices, etc. Now it's dominated by AI evangelism, "I'm Showing HN™ What I Used My Claude Tokens On :)", AI complaining, AI agent strategies, AI's impact on the industry, etc. There are some non-AI posts, but not as many good ones as there used to be, and a lot of the non-AI posts quickly turn out to be AI-written. Because they respect their time as a writer greatly and my time as a reader not at all. It's ClankerNews; the Hackers are in short supply.
There’s something off about Reddit. Either I grew up or it became hollow from within. Just angry people scolding each other all the time.
There are some true gems however but usually in smaller focused subreddits.
Yeah, the smaller subreddits are good. The problem is it’s basically killed off alternative forums.
I never thought I’d miss vBulletin so much.
The Something Awful forums are still very much alive.
Think any platform becomes terrible over time once it hits a certain level of mass appeal. I loved Reddit and Quora in 2010.
They switched their "Best" sorting algorithm to be engagement-based rather than upvote-based [1]. Upvotes are just one of many metrics now; heavy comment interaction is another. It incentivizes rage bait and performing for the crowd with every comment and post. They also switched to an almost purely moderator-curated front page [2] rather than letting users vote.
1: https://www.reddit.com/r/blog/comments/o5tjcn/evolving_the_b...
2: https://news.ycombinator.com/item?id=36040282
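To make the claimed mechanism concrete, here's a toy comparison of pure upvote ranking versus engagement-weighted ranking. The numbers and weights are invented for illustration; Reddit's actual formula is not public.

```python
# Toy illustration: ranking by upvotes alone vs. ranking that also
# rewards comment volume ("engagement"). All values are made up.

posts = {
    "thoughtful writeup": {"upvotes": 500, "comments": 40},
    "rage bait":          {"upvotes": 120, "comments": 900},
}

def upvote_score(p):
    return p["upvotes"]

def engagement_score(p, comment_weight=1.0):
    # Counting each comment like a vote lets argument-heavy threads win.
    return p["upvotes"] + comment_weight * p["comments"]

by_upvotes = max(posts, key=lambda k: upvote_score(posts[k]))
by_engagement = max(posts, key=lambda k: engagement_score(posts[k]))
print(by_upvotes)     # thoughtful writeup
print(by_engagement)  # rage bait
```

Under the engagement metric, the post that made everyone argue tops the feed, which is exactly the incentive shift being described.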
One of the most magical things about HN is that heavy comment activity is punished as a negative signal, not rewarded as a positive one.
I've wondered the same thing, but you growing up definitely has to be a factor.
> Just angry people scolding each other all the time.
This really does describe it perfectly. I don't know about others, but focusing on my career pulled me out of a relatively low-income and dysfunctional environment. Reddit too often reminds me of people I used to know in real life.
It's been so many years since then, and finding and living a better life was so intertwined with my young adulthood that I almost convinced myself people like that don't exist in real life anymore. I thought the whole world had moved on, but search results nowadays prioritize Reddit enough that I'm routinely proven wrong.
Contrary to popular belief, I don't think most of the stuff on there is fake. Those people probably really are like that. Certain ways of thinking can become so normalized that they don't even see what there is to be ashamed about. What I sense the most on there is a lot of stress and the resulting irrational fears that pour out of people when they feel too much pressure. People under a seemingly endless and vague threat will go a little nuts and start to swat at anything that disturbs their worldview.
Reddit is still a step above other alternatives.
A good test for any community: try posting something that is factually incorrect but supports the community's agenda. Does the community call it out? On Reddit, it does happen.
In my experience, that kind of thing might only get called out by moderators or the outliers who reply the most. They're the ones with the strongest interest in proving anything. Only then will the rest of the community dogpile. Otherwise, it goes ignored.
You should try Lemmy. It feels a lot like Reddit did in like 2012. Small, but a great community.
It was inevitable given it's a top 7 most popular site.
The reality is that the masses, the real world, the average person, is an asshole.
It doesn't show in the real world, because people learn to hide their assholeness at a very early age (or they get punched in the face until they do).
On an anonymous forum, you don't have to hide it.
Frankly it's amazing the site never devolved into 4chan. I attribute that to all the people doing free labor: the mods.
While that may be true, there's a certain personality attached to reddit power users that is particularly odious.
Mind showing me an example of a "power user"?
I've never really paid attention to usernames. Like I'll notice someone's name, out of familiarity.
But if I noticed you, it's because I liked what you've written in the past.
If I hated someone's writing... I wouldn't think about them at all.
Reddit turned into much more of an echo chamber over time. The moderators and the downvote system destroyed the site. The shift from a free-speech, libertarian, anarchist ideology to a heavily left-leaning one definitely didn't help.
I know this is snarky, and I'm sorry ahead of time, but I don't know how else to make this point...
The fact that the people running r/programming didn't know to wait until April 2 to publish this tells me they don't have real-world experience shipping software in a business environment.
We are SO past the point of software being developed without LLMs at _all_; the trend line is never going to reverse. I don't understand the people digging in as zero-LLM absolutists.
I use LLMs yet I don't care to read about them or their usage at all. I can certainly see the reason why a place called "/r/programming" wouldn't want to have discussion about agent usage either, since it's not programming, it's a different activity.
Yeah I totally get the rule. I use LLMs when developing. In fact, I've been out of Claude tokens for the week since Wednesday, but I use Claude specifically for the boring, simple stuff I don't really want to do, but that Claude can. I'm simply not interested in discussing anything LLMs are able to do, it's not interesting.
It makes sense that a programming subreddit first and foremost discusses programming (the skill). We can go complain about Claude somewhere else if we want to.
Following up, anecdotally: people I talk to who are excited about LLM development usually either care more about product development, or don't have enough programming skill to see how bad the software is. Nothing wrong with either, but it can get tiresome.
> people I talk to who are excited about LLM development usually either care more about product development
This is an interesting thing I've also noticed in public hobbyist forums/discussion spaces where someone who is more interested in making a "product" clashes with people who are just there to talk about the activity itself. It's unfortunate that it happens but it will self-correct over time (like /r/programming here) and the LLM enthusiasts of Reddit will find another place to discuss ways of using them.
If you use them, you should care because there is always something one can learn about things we use.
This seems to be a matter of industry and perspective, like saying everyone loves chocolate.
https://en.wikipedia.org/wiki/Argumentum_ad_populum
I have no reason to believe AI is used as much or as widely as you claim.
In my industry AI is fully available and was almost forced on us, and yet nobody is using it. The process of using AI and then scrutinizing the output is just more work than doing it manually. The most I encounter AI is when running job interviews and watching candidates read AI generated answers off a screen.
My industry also tends to skew much older than where I came from previously writing JavaScript full time. We are also fully remote with lots of status meetings. If I were less confident in my ability to communicate in writing maybe I would be more inclined to use AI.
I also don’t really envision AI accomplishing my multitude of daily managerial and administrative assignments that I have in addition to agile stories and writing code. Comparatively, writing code is the trivial part.
I think they just don't want every post to be about LLMs, vibe coding, harnesses, and whether Claude is down.
Some subreddits forbid memes, because otherwise they get flooded and the good content drowns in them.
Some subreddits only allow certain content on certain days to counter this.
What do you want the mods to do?
It may not be an in-denial, head-in-the-sand situation.
Sometimes a topic gets so popular that it drowns out all the others. At that point, isn't the subreddit just a glorified version of r/llm?
I'll give you one personal example:
The year Caitlin Clark was drafted into the WNBA.
r/wnba went from a subreddit of 9000, to eventually 200k subs.
We were bombarded with CC posts every hour.
- Some of it was trolls staging a race war (this was during US elections).
- Some of it was genuine CC fans, who wanted to talk about CC.
- Some of it was bball nerds, who you know... wanted to talk about a bball player in a bball forum (regardless of who that bball player happens to be).
So what happened was, at any given day, 80% of the front page was CC content.
At that point, we might as well have been r/caitlinclark.
So the mods did something drastic and controversial. They banned all "low effort" CC content.
WTF does "low effort" mean? It pretty much meant 99% of CC posts got removed.
The forum went back to something that resembled a bball forum. That talked about other players. And other teams. Not just Caitlin Clark.
If you can possibly believe it, there are millions of programmers who have never and will never touch <your favorite tool>
Python, cmake, bash, perl, every conceivable tool or language, there's millions of people in the industry who will never touch them.
This might be a wild concept, so make sure you're sitting down: the field of software engineering is unfathomably larger than your personal, extremely narrow, viewpoint.
I have yet to run into any serious project in the wild that is using LLMs for development. I have seen vibecoded intern prototypes that took half a day to vet and dismiss because they were completely useless.
I'm sure your experience is different, but you can't _seriously_ claim we're "past the point" of not using LLMs for programming.
Vibecoding is a fundamentally different kind of activity than actual programming. It's a pure delusional dopamine rush, compared to the deliberate engineering required to build quality software.
What do you consider "serious", as that seems to be the main differentiator here. I know plenty of serious (multiple years of development and users, and began prior to LLMs) projects that have devs using LLMs for development.
That makes me curious, would you like to list them?
They aren't anything particularly special, most projects I interact with have at least 1 dev who uses LLMs in some capacity.
The two I was thinking of when I posted are the Minecraft modpack GregTech: New Horizons, and an Old School Runescape plugin.
https://github.com/gtnewhorizons
https://github.com/osrs-reldo/tasks-tracker-plugin
I see. I would not consider these to be serious projects.
I'm sure they give people joy, and contain some decent problem solving, but they are both small extensions to existing software, and are not really solving interesting new problems or creating their own systems.
The first one is hardly a small extension. It's millions of lines of code, which is multiple times more than the version of Minecraft it extends.
The size of the project is not what matters. It's the systems it includes and how complex they are.
Actually, that raises a question: are the more complex systems in this collection of projects vibecoded or programmed by a human? Which parts do you know include LLM output?
How can you still fail to distinguish between using LLMs as tools and a non-technical person vibe coding? I have yet to run into any serious software engineer who had to dive into a legacy codebase or an unknown tech stack and found no value in e.g. Claude Code for general understanding and refactoring. Not even talking about coding: just the capacity to generate custom, contextualized documentation and examples tailored to your constraints and skills on the fly is ridiculously helpful.
The tool being useful sometimes does not support the statement that we are "past the point" of not using LLMs.
For CRUD apps though, the intern closing the ticket literally 30 minutes after it's created is really hard to battle against. Especially when those tickets were created by suits.
I generally agree: while vibe-coding is here to stay, it's different from designing useful products and systems, and I don't know how to convince colleagues that we should, uh, be careful about all this code we're pushing. I fear all they see is the guy aging out.
Ok well I have plenty of serious, production-level professional experience that says otherwise. Not “vibe coding” - we certainly review the code. It’s a tool that has downsides and failure modes, of course, but it’s at the point where it’s definitely speeding us up and we are using it a lot. Trust me, I’d prefer a world, on balance, where this wasn’t true – I don’t like many of the aspects and uses of the technology – but its utility in programming is undeniable now and the capitalists aren’t taking “no” for an answer.
I'm curious, where are you using the tools? What programming language? what domain? Are you willing to share the projects you're working on?
TypeScript and Go on a 1.5 million SLOC production codebase; a complex SaaS tool for financial planning and analysis. Quite far from being “just CRUD”. Before Anthropic Opus 4.5 I was trying out Claude Code and wasn’t all that impressed, but since then it’s definitely helped. The project I wrapped up before Christmas would have gone into the new year without it. You’ve still got to keep a close eye on it; whenever I’ve got lazy with review, trusting it too much, I’ve always regretted it. It’s never one-shotted anything, even with plan mode and all that. I’m a natural skeptic on this stuff and was very actually skeptical for most of last year. But I’m very confident there’s a large net productivity gain now.
So is this a web app with a go backend?
I don't know where you got the "just CRUD" quote from. I never mentioned CRUD. But this sure sounds like it would be CRUD with some additional models in the backend.
What makes it not just CRUD? Is it using some complex model for forecasting?
I wasn't quoting you, they are scare quotes. It is a web app with a TypeScript and Go backend, but to call it a CRUD app would be misleading (despite the fact that yes once you boil it all down, everything is a CRUD operation) because it's a complex and flexible web app. It's the kind of thing that a bunch of people would prefer to be a native app; it's spreadsheet-like.
> I have seen vibecoded intern prototypes that took half a day to vet and dismiss because they were completely useless.
They weren't useless; they proved whether the direction the prototype was exploring was worthwhile. I've personally made many completely shit code prototypes in the years before we had LLMs. Of course they weren't magically production-ready; that's not the point of a prototype.
The example I'm thinking of did not. It was just the dumbest way to execute an idea we knew was easy to execute (scanning a parameter space).
> I have yet to run into any serious project in the wild that is using LLMs for development.
How about Claude Code? 100% of it was vibe-coded according to its creator.[1] Google and Microsoft also claim a lot of their internal code is AI-generated now. [2] [3]
Naturally, none of the big tech companies will just release a pure vibe-coded project due to structural reasons, but you also _seriously_ can't claim that serious projects don't use LLMs as well these days. Maybe in your limited experience, it isn't true, but that doesn't generalize to what's actually happening.
1. https://www.reddit.com/r/Anthropic/comments/1pzi9hm/claude_c...
2. https://fortune.com/2024/10/30/googles-code-ai-sundar-pichai...
3. https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...
> How about Claude Code? 100% of it was vibe-coded according to its creator.
Agents are trivial to make. I don't know whether that means they're not "serious", but it's exactly the type of thing you can make yourself in a very short time, and exactly the type of thing even LLMs can't fuck up too bad.
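That claim is easy to sanity-check: stripped of polish, the core of an agent is a short loop. A minimal sketch in Python, with a stubbed model standing in for the real LLM API call (the message shapes and the `read_file` tool here are invented for illustration, not any particular vendor's API):

```python
# Minimal agent loop: the model proposes a tool call, we execute it and feed
# the result back into the history, until the model returns a final answer.

def fake_model(history):
    """Stand-in for an LLM API call: requests one tool use, then finishes."""
    if not any(msg["role"] == "tool" for msg in history):
        return {"tool": "read_file", "arg": "notes.txt"}
    return {"answer": "summary of " + history[-1]["content"]}

# Tool registry; a real agent would shell out, edit files, run tests, etc.
TOOLS = {"read_file": lambda arg: f"<contents of {arg}>"}

def run_agent(model, prompt):
    history = [{"role": "user", "content": prompt}]
    while True:
        step = model(history)
        if "answer" in step:  # model says it is done
            return step["answer"]
        result = TOOLS[step["tool"]](step["arg"])
        history.append({"role": "tool", "content": result})

print(run_agent(fake_model, "summarize notes.txt"))
# prints: summary of <contents of notes.txt>
```

Real agents add streaming, context-window management, and error handling, but none of that changes the shape of the loop.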
With regards to the overall point, I think the existence of projects using LLMs to do development doesn't really lend credence to the idea that they're somehow preferable or desirable. People tend to use hyped things, imagine they're useful even when presented with evidence to the contrary, and generally be very resistant to sobering realities.
It took years before people stopped running Hadoop clusters to do things that a single Linux box could get done 10x faster with some basic pipes. I'm sure there are still people who have "serverless backends" that work terribly in every regard compared to literally just a Linux VM somewhere. People in software development tend to find these types of things every once in a while and adopt them wholesale.
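The canonical example, assuming the word-count benchmark that MapReduce tutorials always used: on data that fits on one machine, a plain Unix pipeline does the same job.

```shell
# Word-frequency count, the "hello world" of Hadoop, as a Unix pipeline:
# split words onto lines, sort, count duplicates, rank by count.
printf 'foo bar foo\nbar foo\n' \
  | tr -s ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn
# top line is "3 foo" (uniq -c pads the count with leading spaces)
```

Swap the `printf` for `cat bigfile.txt` and this streams through gigabytes with zero cluster coordination overhead.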
This cycle is helped by the fact that the field has been growing constantly and a lot of the adoption comes from kids who don't know any better. Every piece of shit technology that comes and goes has meat for the grinder coming straight out of university.
Would I put LLMs in the same category as these previous (nearly) useless things? Probably not... But you should never trust peoples' perception of usefulness when it comes to almost anything in software development.
I don't know if Claude Code is actually a great example. If you have used it for longer periods of time, you will have noticed how insanely buggy it is. And for every bug they finally fix, there seems to be a new one introduced. I don't even mind vibecoding. I have vibecoded a couple of tools that my coworkers and I use every day to make our lives easier, but I'm not going to pretend they are anywhere close to something I would release to the public.
It’s juvenile to consider all LLM-assisted coding as vibecoding. I’m not going to expand here because this topic is about as much fun to discuss as politics, but coding assistant tools are just tools.
If you give a regular person a race car, they will crash it about as fast as their vibecoded app crashes. Give the same race car to a pro and it’s a different story.
I still think this was the right decision by the programming mods there. Talking about tools is pretty boring, and you need to train to use something like an LLM assistant. No one who can’t program a language should be using an LLM to learn it unless they know about 2-3 other languages already, IMO.
Nah, I think it really is more nuanced than that. It is true that a non-technical person's vibe-coded side-hustle is completely different from how a professional developer may ship genAI code, but we're willfully glossing over the real problem: professionals are pushing out TONS of genAI code that's closer to vibes than it is to the pre-AI expectations for pushing to prod.
Devs that are pushing crappy code using these tools are
1) devs who would have pushed shit code anyways
2) getting stress from above to push code faster with AI
Good devs will learn to use these tools effectively.
lol yeah the people running the programming subreddit don't understand social context, who could have guessed?
> The fact that the people running r/progamming don't know not to wait until April 2 to publish this tells me that they don't have real-world experience in shipping software in a business environment.
I can't tell if this is a joke or not. Do people really care? Or maybe it's a U.S. thing? At least in my country nobody cares...
I hate AI video, I hate AI art, but if you are pretending that AI isn’t going to be writing code for 99% of projects going forward you are absolutely kidding yourself.
AI video and art is going to be increasingly used in advertising, news/reporting, games, etc. Therefore, you aren't allowed to hate it or even complain about it. Right?
AI will be writing the code for shit-slop apps and libraries. The good ones will be written by humans.
> We are SO past the point of software being developed without LLMs at _all_
That's exactly why I've given up on programming, development or career subreddits. There are a lot of interesting software engineering challenges opening up, but instead of discussing it like professionals it all gets drowned in a big negative mixture of rants against the financial AI bubble, companies using AI as an excuse to lay off, and a general antiwork vibe. All these subreddits have become feel good/bad echo chambers for angry teens and students with no real world professional experience.
So you really don't understand why people with real professional experience might be anxious now, and why there is an antiwork vibe? It's not just junior devs.
I understand why they might be anxious, but my point is that it’s unrelated to the technology itself. Imagine people in 2000 denying that the internet works because of the dotcom bubble. Same with layoffs: they are not really due to AI (https://en.wikipedia.org/wiki/2026_United_States_corporate_m...), it’s a political discussion. And the antiwork vibe is not new. I have strong political convictions about how we should more equally redistribute capital gains, so that even if AI were able to replace software engineers it would not be an issue, but again, that is political.
LLM tooling brings a lot to senior devs. I have 15 YOE, I own a small agency, we are shipping faster, with fewer bugs, and believe it or not we are hiring, because we are able to take on more work and grow, as is logical absent the political issues plaguing the US in particular. The market is already adjusting, hence why to me we are way past the point of developing professionally without LLMs.
So no, I don’t get why these political topics aren’t taken elsewhere, nor the irrational denial of the technology because of said political issues.
I know LLM-generated code comes with its own challenges, but the absolutists are definitely clinging to a time that has passed. I saw a recent discussion on Immich where a maintainer flat-out denied a PR, saying "That diff looks LLM-generated to me; is that indeed the case? If so, we'd prefer not to receive a PR for it". The PR was from a professional software engineer, who worked weeks of his free time on a big feature. Well structured + tested. Dismissed just because AI was used. https://github.com/immich-app/immich/discussions/23745#discu...
"Well structured + tested". Who would know? The diff is almost 200k changed lines. Good on them for saying no to this nonsense.
There's a good chance the actual needed implementation is less than 20k lines (I've found that LLM bloat grows exponentially), but even that's a stretch to review and accept wholesale.
I'm the person working on that fork. Yes, it has now diverged by 200k+ lines, but half of that is specs, research and documentation, and it includes a month's worth of work.
The comment in question was a small feature of about 1.5k lines changed and it was solidly tested.
Eh, fair enough. 1.5k is reasonable. Have you tried just writing it yourself instead? It's likely it'll be less than 1k lines, and you should have no problem writing an implementation yourself if you understand the structure of the LLM version.
Why would I write it myself? I use Claude Code 12 hours a day and I'm really confident with what I'm able to build with it. I use it at work with incredible results. Spec-driven development with harnessing is super powerful; I'll never be writing large features by hand again.
Look at what I'm able to achieve in my free time after work https://opennoodle.de
Would've never had the time without my army of agents
Heh, fair enough. To me this comes off as "I'm unable to write it myself [possibly because I've outsourced my thinking too much]", to be honest, but I'm not going to argue; you're the one who presumably wants this code to end up in that repository.
I wouldn't really consider (what is likely) sub-1kloc a "large feature", but to each their own.
I don't want it to end up in that repo anymore, hence the fork. I've got a growing community of people who have been eagerly awaiting this feature and a ton more that I built.
I definitely could write this by hand - the stuff I built in the last 10 years before LLMs was more complex than this - but there's no way I'm spending all my free time slowly crafting something if I can just use AI and get the same results much faster.
> I don't understand the people digging in as zero LLM absolutists.
Relevant read: https://en.wikipedia.org/wiki/Luddite
I feel like it’s easy to understand what’s motivating these individuals to take that stance.
Definitely not the same. Luddites were fighting for humane working conditions; breaking machines was just a means to an end. They weren’t doing it because machines were the problem.
Anti AI crowd on the other hand just doesn’t like AI. A modern equivalent of a Luddite would be someone going on strike to protest firings.
You are being overly dismissive of a mindset you obviously don't understand. Of course being anti-AI is about decent living conditions for humans. Most of us don't believe in singularity or Matrix-style threats.
But current AI is actively destroying our breathable/livable planet by drawing unmatched quantities of resources (see also DRAM shortage, etc), all the while exploiting millions of non-union workers across the world (for classification/transcription/review), and all this for two goals:
1) try to replace human labor: problem is we know any extracted value (if at all) will benefit the bourgeoisie and will never be redistributed to the masses, because that's exactly what happened with the previous industrial revolutions (Asimov-style socialism is not exactly around the corner)
2) try to surveil everyone with cameras and microphones everywhere, and build armed (semi-)autonomous robots to guard our bourgeois masters and their data centers
There is nothing in this entire project that can be interpreted to benefit the workers. People opposing AI are just lucid about who that's benefiting, and in that sense the luddite comparison is very appropriate.
You have misinterpreted my comment. But I concede that I should have written it more clearly.
I divide anti-AI people into two groups. Those who don’t like AI because of what it is, and those who don’t like it because of its impact on society. Naturally there is an overlap.
Luddites were not opposed to the technology. So the comparison to them is only correct for the latter group.
Not talking about LLMs on a forum is not going to change anything in the grand scheme of things. It could be a protest, but I see it more (the feeling I get from the announcement) as a means to protect the forum from being overrun regardless whether AI is ultimately good or bad.
Also note that nowhere in my comment have I stated my position in this argument.
Sorry for misinterpreting your original comment.
I'm not really convinced there are people who don't like AI "because of what it is". I mean, because of what it is, beyond any social/political considerations.
The only case I know of is when there was an open letter from Sam Altman and other AI investors calling out the existential danger of AI, which in my view was a way to divert the debate from political questions to hypothetical Matrix/Terminator questions about consciousness and singularity.
really? is it so hard to believe that people dislike AI because it is unreliable, can't be trusted, changes how we work with code, takes the fun out of coding?
i am not worried about social consequences. society can adapt.
i am also not worried about energy use. we have endless clean energy if we can figure out how to use it.
yes, i am worried about society choosing the wrong adaptation. that is, i believe we should train everyone to be teachers, doctors, scientists, and artists. the stuff that AI should not be doing. but i am not worried about using AI for automation, putting people out of jobs, if we give them the opportunity to learn new jobs and,
IF, AND ONLY IF, we get AI to do its work with 100% reliability and accuracy.
only then will AI be useful. i have tons of software projects that i'd like to get done. but i can't trust AI to do them for me, because i would spend even more time verifying the results than i would coding it myself.
so yeah, i absolutely don't like AI for what it is, a tool with limited uses that requires me to work in a way i don't want, if i want to benefit from it.
Oh, thank you for clarifying! That is entirely believable, and i'm also one of these people then. I just didn't understand what you meant. I thought you meant people hated AI for being creepy alien tech from scifi movies, not for being unreliable, untrustworthy, etc...
Sorry again for the misunderstanding.
HN should also limit all these self-promoting AI posts.
Favourite genres of posts on HN in the past 2 years:
* “I am bullish about AI”
* “I am an AI skeptic, [long rambling], but overall, I am bullish about AI”
It’s amazing how even criticism of the technology somehow ends up being a hype post. At least there are still places on the Internet where we can have a serious discussion about the downsides.
As someone who recently wrote the latter post (https://news.ycombinator.com/item?id=47183527), the more nuanced approach that "AI has good and bad things" is more reflective of the real world than an absolute "AI is good" or "AI is bad", and at the least it's more conducive to civil discussion.
I disagree, but I am the outlier here.
I prefer strong opinions to the academic conclusion that a thing has some good parts and some bad parts. I feel 99% of modern essays are afraid to take a stance on anything, and it makes for uninteresting reading and even less interesting discussion.
To be fair, my issue with the born-again AI skeptic genre of posts is that it's basically clickbait. As if being a skeptic at one point makes your argument stronger, proving that the hype is real, and one should pay attention. It's intellectually dishonest, even if meant in earnest.
(Your post history shows that you have been anything but an AI skeptic. Case in point about intellectual dishonesty.)
"Asbestos has good and bad things"
"Assault rifles in the hands of ordinary citizens have good and bad things"
"Everyday chemicals in the food supply have good and bad things"
Look, some issues require nuance. Others don't. It's gaslighting to tell activists who consider Big AI to be a net negative for society (by an order of magnitude!) that their position isn't "real-world reflective".
See dang’s comments on https://news.ycombinator.com/item?id=47340079 . (That link itself is a submission about HN’s recent guidelines changes to include “Don't post generated comments or AI-edited comments. HN is for conversation between humans.”)
I interpreted the GP as "personal blog posts about AI/LLMs", not LLM-generated comments.
dang’s comments in the link above address “Show HN” submissions. (That was my interpretation of “self-promoting AI posts”… :)
I've been hiding them all. Makes the front page look a lot better.
I can foresee new subreddit rule: 'Stop complaining "This is turning into the orange site". Just report the slop.'
How the tables turn.
That sounds absolutely amazing. I will reconsider creating a new account and using Reddit again after walking away about a decade ago.
I deleted my account a few years ago, I might actually create one now, it'll be preferable to HN if they stick with this new rule.
[dead]
Maybe this was a genius move made precisely to be ambiguous on whether it was April Fools or not... so that the author can later read the room and clarify whether it was or was not April Fools, without much repercussion either way.
Nope:
> Timing just worked out this way. New month, ideal timing for testing a new rule.
Or so one says. (Not necessarily saying that it was a bad decision.)
What evidence of malice supports your claims here?
(Evidence-free conspiracy theories are generally unwelcome at HN.)
Clankers outta here! Wish there was an HN toggle to enable hiding all LLM programming submissions.
This is to be expected. There's a definite split in the engineering community between those who are embracing AI, and those who are rejecting it. It's now become political, like systemd and wayland.
I have not met any engineer who is actually embracing LLMs. I have met managers and interns who are.
Even people who are actively embracing it don't want 95% of all submissions in most dev-related subs to be LLM-adjacent. There are separate subreddits for that, just like there are subreddits about macOS and Linux specifically, despite a huge number of devs using those OSes.
Also, most discussions about AI / LLMs on career or general programming subreddits are not what I would call productive. I _want_ new useful information about this topic, but I know I won't get it as things are right now.
[flagged]
I'm not posting LLM comments. Stop bothering me.
I had almost forgotten about that subreddit. Sadly it has been in a zombie state for years now. Despite having millions of members you can hardly find even 100+ comments on any post in the front page.
Last time I checked, only political posts (like those about offshore programmers) got any kind of attention. Most technical posts barely get 10 comments. Some of the smaller subreddits (like /r/ProgrammingLanguages) are much better.
It's badly moderated (not enough mod resources or something); there is essentially only one mod. Bad comments have a 99.9% chance of not being moderated out at all, and that killed my interest in participating.
A question to people here: what’s a smallish tech community with a slightly more serious level of discourse than this subreddit?
https://lobste.rs/
Can y’all give me an invite
Lobste.rs is against AI in a pretty fanatical way, and most posts have little to no discussion; also, good luck accessing them if you use Brave Browser.
...Hacker News?
If you enjoy comedy, you should check the status of subreddits like /r/selfhosted or /r/homelab, etc. I find them interesting because they sit on the edge between computer power users and software developers. They used to be nice communities.
Now it’s people sharing AI apps that look exactly like other AI apps that they have never heard of [1]
Projects rise then implode hilariously in a month [2]
An ebook management project that grew over a year with a pretty conservative feature set, then in 3 months implements every ebook feature under the sun, breaks everything, then implodes. The funniest thing is when the “AI Slop” callout is itself AI-written and nobody notices. [3]
Like… amazing comedy. Then after the owner deletes the repo, 10 people have to role-play the hero who “has the code” because clicking Fork on GitHub is the sign of a true hacker.
[1] https://old.reddit.com/r/selfhosted/comments/1r9s2rn/musicgr...
[2] https://old.reddit.com/r/selfhosted/comments/1rckopd/huntarr...
[3] https://old.reddit.com/r/selfhosted/comments/1rs275q/psa_thi...
Not to worry, the security problems were patched out (with the help of "grumpy-AI", of course): https://gitlab.com/g33kphr33k/musicgrabber/-/commit/a1cb0c0e...
Good for them. Keep your projects human made by adopting a good policy. I use this one:
https://sciactive.com/human-contribution-policy/
I think good content is generated by deep thought, not by repeating what an LLM says
IMHO, Mitchell Hashimoto[^1] is a good example for the community to learn how to cooperate with modern LLMs.
[^1]: https://github.com/mitchellh
Wow that's lovely. Wish we could do that on HN for a bit.
(Yes, I know, I can install an extension or something to hide LLM/AI submissions. I don't want to, and that's not the same thing, and won't have the same effect.)
I use LLMs, I think they are useful, but oh my sweet jesus I am so tired of reading and hearing about them everywhere.
Strange that they didn't provide a link to an LLM subreddit, or point to the right place to discuss LLMs ...
As others have noticed in the thread, the timing is suspicious - could be an April Fools' joke.
The original post was edited with "this is not April Fool's"
One could argue that having an LLM churn out code is not programming.
Not a surprise, reddit users are clueless.
> We also believe that, generally, the community have been indicating that, by and large, they aren't interested in this content.
How can that be true? Reddit is vote-based. So if people weren't interested, they wouldn't vote it up and it wouldn't appear on the front page. Hacker News has no rule banning posts about Barbie and yet, amazingly, Barbie rarely makes it to the front page, because that's how upvotes work.
People clearly are interested enough to vote LLM related posts up, but a bunch of mods who don't like AI are upset enough to want to dictate what others can find interesting. Which is not unusual for Reddit.
Unlike Hacker News, Reddit's new Best algorithm often surfaces newly posted posts (a good idea that helps mitigate the cold-start problem), but that means people who are subscribed to /r/programming will see posts about LLMs and typically downvote them.
From the user responses to the linked ban, said ban was a positive decision for that community.
If you rely on votes, you get only the lowest-common-denominator posts. See the default subreddits. This is a very well-known failure mode.
Not being able to discuss the biggest change to our job in living memory is such a reddit thing to do, just sticking their heads in the sand.
[dead]
I think they are just too lazy to moderate slop/flamewar posts and comments about LLMs.
Which is fair, but just be honest about it.
/r/programming was already unappealing because they tend to be late to surface interesting content in comparison to HN.
Sweet, so the LLM can interact on topics not about LLM
> Please don't post comments saying that HN is turning into Reddit. It's a semi-noob illusion, as old as the hills.
If only, just this once, it were true. Sigh.
[dead]
The takes on LLM programming on reddit are hilarious and borderline sad. It's way past the point of denial, now into delusions.
They truly believe LLMs are close to useless and won't improve. They believe it's all just a bubble that will pop and people will go back to coding character by character.
People still use Reddit?
What do you recommend instead? Reddit is like reading YouTube comments nowadays, I miss when discussions were literate and informed.
I have been discovering/enjoying the 'smol' web, unironically.
Hmm, there's a site I wanted to share with you but I can't find it atm., it's a directory of personal websites sorted by topic. It pops in here from time to time.
I wouldn't call them 'people'.
Ignorance isn't bliss. They have never had a year-over-year downturn in their userbase yet.
But hasn't it gone down in quality with broader mainstream appeal, more AI slop, and just general self-promotion? I feel like a lot of niche communities have also lost their core or original user bases, which are not as active any more. Or it could just be me? For example, off the top of my head without digging too deep, r/juststart used to be very high-signal and strongly moderated, but now not so much. On the other hand, I did discover r/laundry recently, with some awesome content around “spa day”, but again that's mainly one user responsible. I guess another big gripe is having to use the Reddit mobile app after they closed their APIs and shut down third-party apps, because now I can't browse; it's more feed-like. Sorry for the ramble, not sure what my point is, but hoping others can share their experiences and any advice too, I guess.
I'm not sure how you can claim that going more mainstream would decrease the quality of a site that gave us "we did it reddit!" and "the bacon narwhals at midnight".
You think this place, which people in my circles infamously refer to as the "orange site", is considered a bastion of good conversation among people who don't frequent it?
Do people use one of the most visited websites in the world? Yes.
/r/horsecarriage bans all discussion of cars
/r/assembly bans all discussion of 4GL
LLM programming isn't going away by not talking about it. It's time to move on, and eventually considering farming.
> /r/horsecarriage bans all discussion of cars
Makes sense. If I'm looking to read discussions about stables selection, feed prices, etc, why would discussions of spark plugs be relevant?
> /r/assembly bans all discussion of 4GL
Also makes sense; people wanting to discuss register allocation, bit twiddling, etc probably aren't interested in insurance claims taxonomies or similar.
> LLM programming isn't going away by not talking about it.
Right, but is the context still /r/programming? After all, there are tons of subreddits where you can discuss LLM programming. Why do you need to shove it into a space created for human thoughts on programming?
> It's time to move on, and eventually considering farming.
Okay, understood, but my question still stands - why conflate programming with vibe-coding?
/r/horsecarriages banning discussion of cars makes sense though. It's not a horse carriage. If you want to discuss cars, go to /r/cars.
It's not about wishing it goes away, it's that people don't want to see JavaScript/Java/Swift blog articles when they visit r/assembly.
More like /r/cars bans all discussion of electric cars.
OK, I see your point, the problem is more about being off-topic than the LLM programming itself. And that's correct, we are strict people, after all.
Reddit is doomed anyway. People are using AI to start threads, and other people are using AI to comment on these threads. You can never know what you're interacting with.
Do you think that this is not happening here?
Worse, I am repeatedly being accused nowadays of being an LLM. It probably doesn’t help that I riff-write with only a rough outline of what I want to say, not how to say it.
If the accusation is that I am an inference engine pumping out words based on a trailing context window then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of y/C/N2 -> Si+e- (sunlight, coal, and wind turned into silicon electrons.)
> If the accusation is that I am an inference engine pumping out words based on a trailing context window then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of y/C/N2 -> Si+e- (sunlight, coal, and wind turned into silicon electrons.)
This sort of tells me that you are pro-LLM, and most pro-LLM people mostly paste the contents of their ChatGPT output and try to pass it off as their own.
Given that you say you aren't, the most likely explanation might be that you are spending a lot of time reading LLM prose, and are starting to write like it now too.
I think you're proving OP's point.
Repeatedly, on HN? I couldn't find such comments in your history.
"your post was written by an LLM": https://news.ycombinator.com/item?id=47584052
"I hate this AI slop commenting fad": https://news.ycombinator.com/item?id=47385722
[flagged]
Got any proof?
Check a larger thread. It is pretty clear since there are people doing nothing to hide the writing style.
You are absoawesomeamazingaffirmativeabundantauthenticabsolutely right!
> Check a larger thread. It is pretty clear
It tends to get downvoted and flagged.
I ran a pretty interesting AI-detection demo over historic HN data, and would love to see if HN could make use of this tech. But I have no clue how to reach out or who the right contact might be.
Make it into a browser extension. Or, honestly, just a page with outputs and a tip jar. If there are interesting findings, highlight them, maybe blog about it and post that.
If you email comment links to the mods that you believe are AI-assisted, they’ll review and act on that. Footer contact link. It’s not hopeless.