Stoat removes all LLM-generated code following user criticism

(github.com)

38 points | by ashleyn 8 hours ago

63 comments

  • longfacehorrace 7 hours ago

    Looked at repos of the two loudest users in that thread; either they have none or it's all forks of other projects.

    Non-contributors dictating how the hen makes bread.

    • ronsor 7 hours ago

      In general, caving to online mobs is a bad long-term strategy (assuming the mob is not the majority of your actual target audience[0]). The mob does not care about your project, product, or service, and it will not reward you for your compliance. Instead it only sees your compliance as a weakness to further target.

      [0] While this fact can be difficult to ascertain, one must remember that mobs are generally much, much louder than normal users, and normal users are generally quiet even when the mob is loud.

      • lich_king 7 hours ago

        Yes, but also... that's like 90% of the interactions you get on the internet?

        I don't want to be too meta, but isn't that a description of most HN threads? We show up to criticize other people's work for self-gratification. In this case, we're here to criticize the dev for caving in, even though most of us don't even know what Stoat is and we don't care.

        Except for some corner cases, most developers and content creators mostly get negative engagement, because it's less of an adrenaline rush to say "I like your work" than to say "you're wrong and I'm smarter than you". Many learn to live with it, and when they make a decision like that, it's probably because they actually agree with the mob.

        • ronsor 7 hours ago

          I don't actually care what the dev does. That's their prerogative, and it doesn't affect whether or not I'll use the software (I will if it's useful). I think that's the difference between here and a "mob", assuming other commenters think similarly.

          I do think it's harmful to cave in, but that doesn't make me think less of the maintainer's character. On the other hand, some of the commenters in the issue might decry them as evil if they made the "wrong" decision.

          It's fine to have opinions on the actions of others, but it's not fine to burn them at the stake.

        • longfacehorrace 6 hours ago

          Not just online: priests, CEOs, celebrities, politicians. If you don't make them happy, you're a sinner, a bad employee, a hater of freedom, etc.

          Anyone with a rhetorical opinion who otherwise contributes little to getting cars off assembly lines, homes built, or network cables laid.

          In physical terms, the world is full of socialist grifters in the sense that they have only a voice, no skill. They are reliant on money because they're helpless on their own.

          Engineers could rule the world if they acted collectively rather than starting personal businesses. If we sat on our hands until our demands were met, the world would stop.

          A lot of people in charge fear tech unions because we control whether the world gets shit done.

    • throawayonthe 7 hours ago

      aren't forks of other projects how you usually contribute code on github?

    • rurban 5 hours ago

      Makes a good block list: people vehemently arguing to block AI.

    • blibble 6 hours ago

      most of the anti-AI community have already migrated their repos from Slophub

  • singularfutur 7 hours ago

    Reverting a few trivial commits because of purity tests is a bad precedent. It rewards the loudest commenters and punishes maintainers.

    • imsofuture 7 hours ago

      It will be a painful decade before those who have already lost this weird ideological war realize it.

      • rsynnott 6 hours ago

        And which side is that? I mean, from my point of view, it seems like it’s probably the ones who are having a magic robot write a thousand lines of code that almost, but not quite, does something sensible, rather than using a bloody library.

        (For whatever reason, LLM coding things seem to love to reinvent the square wheel…)

        • spankalee 6 hours ago

          > the ones who are having a magic robot write a thousand lines of code that almost, but not quite, does something sensible

          Gee, I wonder which "side" you're on?

          It's not true that all AI-generated code looks like it does the right thing but doesn't, or that all human-written code does the right thing.

          The code itself matters here. So given code that works, is tested, and implements the features you need, what does it matter if it was completely written by a human, an LLM, or some combination?

          Do you also have a problem with LLM-driven code completion? Or with LLM code reviews? LLM assisted tests?

          • rsynnott 5 hours ago

            Oh, yeah, I make no secret of which side I’m on there.

            I mean I don’t have a problem with AI driven code completion as such, but IME it is pretty much always worse than good deterministic code completion, and tends to imagine the functions which might exist rather than the functions which actually do. I’ve periodically tried it, but always ended up turning it off as more trouble than it’s worth, and going back to proper code completion.

            LLM code reviews, I have not had the pleasure. Inclined to be down on them; it’s the same problem as an aircraft or ship autopilot. It will encourage reduced vigilance by the human reviewer. LLM assisted tests seem like a fairly terrible idea; again, you’ve got the vigilance issue, and also IME they produce a lot of junk tests which mostly test the mocking framework rather than anything else.

        • kingstnap 6 hours ago

          Dependencies aren't free. If you have a library with fewer than a thousand lines of code total, that is really janky. Sometimes it makes sense, as with PicoHTTPParser, but it often doesn't.

          Left-pad isn't a success story to be reproduced.

          • rsynnott 5 hours ago

            Not saying left pad is a good idea; I’m not a Javascript programmer, but my impression has always been that it desperately needs something along the lines of boost/apache commons etc.

            EDIT: I do wonder if some of the enthusiastic acceptance of this stuff is down to the extreme terribleness of the javascript ecosystem, tbh. LLM output may actually beat leftpad (beyond the security issues and the absurdity of having a library specifically to left pad things, it at least used to be rather badly implemented), but a more robust library ecosystem, as exists for pretty much all other languages, not so much.

        • GorbachevyChase 6 hours ago

          I’m not sure where you’ve been the last four years, but we’ve come a long way from GPT 3.5. There is a good chance your work environment does not permit the use of helpful tools. This is normal.

          I’m also not sure why programmatically generated code is inherently untrustworthy but code written by some stranger whose competence and motives are completely unknown to you is inherently trustworthy. Do we really need to talk about npm?

        • ronsor 6 hours ago

          Not once in history has new technology lost to its detractors, even if half its proponents were knuckleheads.

          • latexr 6 hours ago

            Web3, Google Glass, Metaverse, NFTs…

          • rsynnott 5 hours ago

            Ah, yes. That’s why we all have our meetings in the metaverse, then go back home on the Segway, to watch 3d TV and order pizza from the robotic pizza-making van (an actual silly thing that SoftBank sunk a few hundred million into). And pay for the pizza in bitcoin, obviously (in fairness, notoriously, someone did do that once).

            That’s just dumb things from the last 20 years. I think you may be suffering from a fairly severe case of survivorship bias.

            (If you’re willing to go back _30_ years, well, then you’re getting into the previous AI bubble. We all love expert systems, right?)

          • BJones12 6 hours ago

            Nuclear power disagrees

            • raincole 6 hours ago

              Nuclear power will win (obviously). Unless you're talking about nuclear weapon.

          • hubertdinsk 6 hours ago

            The latest counter-example is NFTs.

            • ronsor 6 hours ago

              NFTs lost because they didn't do anything useful for their proponents, not because people were critical of them. They would've fizzled out even without detractors for that reason.

              On the other hand, normal cryptocurrencies continue to exist because their proponents find them useful, even if many others are critical of their existence.

              Technology lives and dies by the value it provides, and both proponents and detractors are generally ill-prepared to determine such value.

              • rsynnott an hour ago

                Okay, but during the NFT period, HN was trying to convince me that they were The Future. Same with metaverses, same with Bitcoin. I mean, okay, it is Different this time, so we are told. But there’s a boy who cried wolf aspect to all this, y’know?

                Baseline assumption: HN is full of people who assume that the current fad is the future. It is kind of ground zero for that. My HN account is about 20 years old and the zeitgeist has been right like once.

              • hubertdinsk 6 hours ago

                oh it's "because of this and that" now?

                The original topic was "not once blah blah...". I don't have to entertain you further, and won't.

              • blibble 6 hours ago

                moving the goalposts

          • qotgalaxy 6 hours ago

            [dead]

    • Seattle3503 6 hours ago

      This sort of purity policing happens to other open source mission driven projects. The same thing happens to Firefox. Open source projects risk spending all their time trying to satisfy a fundamentally extreme minority, while the big commercial projects act with impunity.

      It seems like it is hard to cultivate a community that cares about doing the right thing, but is focused and pragmatic about it.

    • Palomides 6 hours ago

      what if the users legitimately don't want AI written software?

      • raincole 6 hours ago

        You have to think twice about whether you really want to cater to these 'legitimate users', then. In Steam's review sections you can find people giving negative reviews just because the game uses Unity or Unreal. Should devs cater to them and develop their own in-house engines?

        • Palomides 6 hours ago

          maybe? devs should weigh the feedback and decide what they think will best serve the project. open source is, especially, always in conversation with the community of both users and developers.

          • Seattle3503 5 hours ago

            > open source is, especially, always in conversation with the community of both users and developers

            Not necessarily. sqlite doesn't take outside contributions, and seems to not care too much about external opinion (at least, along certain dimensions). sqlite is also coincidentally a great piece of software.

      • minimaxir 6 hours ago

        Then they have the right to not use it: Stoat does not have a monopoly on chat software.

    • minimaxir 6 hours ago

      And then you have the "Alas, the sheer fact that LLM slop-code has touched it at all is bound to be a black stain on its record" comments.

    • blibble 6 hours ago

      maybe a preview of what's to come when the legal system rules the plagiarism machine's output is a derivative work?

      • spankalee 6 hours ago

        Since a human can also be a "plagiarism machine" (it's a potential copyright violation for me and an LLM alike to create images of Mickey Mouse for commercial use), it'll matter exactly what the output is, won't it?

  • pythonaut_16 7 hours ago

    Wastes of time like this are exactly why Stoat/Revolt is unlikely to ever be a serious Discord alternative

    • argee 7 hours ago

      Could you elaborate on this? I can’t tell whether you mean to say that open source projects run into user-initiated time sinks that detract from their productivity (which is arguably the case for any public facing project), or whether private repositories bypass this type of scrutiny by default which affords them an advantage, or whether this is about the Stoat/Revolt devs specifically and how they choose to spend their time.

      • ronsor 6 hours ago

        I think the parent comment is referring to the fact that even focusing on whether ~100 lines of code across 3 commits should/should not be generated by an LLM is meaningless bikeshedding which has no place in a serious project.

    • Palomides 6 hours ago

      why? I think having a stated policy on LLM use is increasingly unavoidable for FOSS projects

  • stavros 6 hours ago

    I love how people in the thread are like "if I'm going to ask my group of friends to switch to this, I need to know it's not written by security-issue-generator machines", meanwhile at Discord LLMs go brrr:

    https://discord.com/blog/developing-rapidly-with-generative-...

    • ronsor 6 hours ago

      To be fair, many of them are already fleeing Discord over the ID surveillance, so it makes sense that they would be pickier this time.

    • latexr 6 hours ago

      No one on the thread is advocating for Discord, so I don’t understand what argument you are making.

      • stavros 5 hours ago

        What non-LLM using service do you think the people saying "I can't switch to Stoat if it uses LLMs" are switching from?

        • latexr 5 hours ago

          You pointed out Discord are using LLMs, so by definition that can’t be the “non-LLM using service” they are switching from.

          But if they are switching from Discord, then that means they are unhappy with it too, thus they are not advocating for it.

          So, again, what’s your point?

          • stavros 4 hours ago

            My point is there is no non-LLM service. The commenters simply focus on the thing they saw, and didn't even bother comparing against their existing alternative.

            It's just the perfect world fallacy.

            • latexr 4 hours ago

              > My point is there is no non-LLM service.

              Considering Stoat just (supposedly) removed all LLM code from their code base, there is at least one. I’d expect, based on Meredith Whittaker’s stance regarding LLMs, that Signal also doesn’t have LLM code, though I haven’t verified.

              > The commenters simply focus on the thing they saw, and didn't even bother comparing against their existing alternative.

              I mean, how do you know? There is one mention of Discord in that thread. Making sweeping statements about “the commenters” doesn’t seem right.

  • sodality2 7 hours ago

    If only the average open source project got this level of scrutiny actually checking for vulnerabilities. I get that you don't want your private chats leaked by slopcode, but this was a few dozen lines of scaffolding in a large piece of software created before LLM coding; it would have been better to register your discontent without making demands, then continue to watch the repo for vulnerabilities. This feels like fervor without any work behind it.

  • ronsor 7 hours ago

    It seems the thread was brigaded by militant anti-AI people upset over a few trivial changes made using an LLM.

    I encourage people here to go read the 3(!) commits reverted. It's all minor housekeeping and trivial bugfixes—nothing deserving of such religious (cultish?) fervor.

    • raincole 7 hours ago

      At this point perhaps to not disclose AI usage is the right thing to do. Transparency only feeds the trolls, unfortunately.

      • ronsor 7 hours ago

        I have been saying this for a few years at this point. Transparency can only exist when there is civility, and those without civility deserve no transparency[0].

        [0] As a corollary, those with civility do deserve transparency. It's a tough situation.

  • cat_plus_plus 6 hours ago

    "it's worth considering that there are many people with incredibly strong anti-LLM views, and those people tend to be minorities or other vulnerable groups."

    I have pretty low expectations for human code in that repository.

    • ronsor 6 hours ago

      The response mentioning minorities is obviously bad faith. Even if true, it's not really relevant, and most likely serves as a way to tie LLM use to slavery, genocide, or oppression without requiring rational explanation.

      • latexr 5 hours ago

        I just read it, and found no bad faith in it. It was polite, not pushy, explained the argument well (though of course you may disagree with it), gave a business reason, and even ended with “thank you for reading and considering this, if you do”.

        > and most likely serves as a way to tie LLM use to slavery, genocide, or oppression without requiring rational explanation.

        Assuming and ascribing nefarious motivations to a complete stranger can be considered bad faith, though. Probably not your intention, but that’s how it came across.

        • ronsor 4 hours ago

          I have observed this pattern before. Usually minority groups are mentioned in an attempt to shift a debate toward values (which basically means no meaningful debate if you disagree) and away from technical considerations (which arguably deserve the most attention in a software product).

          Aside from that, the statement is not empirically true (from my perspective at least). Evidence isn't provided either. I'm not saying that the commenter consciously wanted to tie LLM use to those negative things, but it could be done subconsciously, because I have genuinely seen those arguments before.

          • latexr 4 hours ago

            I understand your point and believe you believe it, which is why I mentioned I don’t think you were arguing in bad faith. What I am saying is I don’t think the commenter in question was acting in bad faith either, because that requires deception. In other words, it seems to me that commenter—like yourself—was arguing genuinely. Whether one agrees with their argument (or yours) is a different matter altogether, but bad faith it doesn’t seem to be.

            Hope that clarifies what I’m getting at.

    • Seattle3503 6 hours ago

      Is that claim even empirically true?

  • philipwhiuk 6 hours ago

    The fun part is this only happens because Claude Code commits its changes.

    If you use, for example, GitHub Co-Pilot IDE integration, there's no evidence.

    • zihotki 3 hours ago

      There is also `git commit --amend` available; there are many ways to hide the evidence if one needs to.
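
      For instance, a minimal sketch (hypothetical author and commit message, assuming a standard git setup):

          # hypothetical: change the recorded author of the most recent commit
          git commit --amend --no-edit --author="Jane Dev <jane@example.com>"
          # or rewrite the commit message itself, e.g. to drop a generated
          # "Co-authored-by:" trailer
          git commit --amend -m "fix: correct typo in settings page"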

  • logicprog 7 hours ago

    What a shame

  • deadbabe 6 hours ago

    If you find yourself having to use LLMs to write a lot of tedious code, you have bad architecture. You should use patterns and conventions that eliminate the tedium, by making things automagically work. This means each line of code you write is more powerful, less filler stuff. Remember the days when you could create entire apps with just a few lines of code? So little code that an LLM would be pointless.

  • ragthr 7 hours ago

    Nice move! It is fun to watch the copyright thieves and their companies go into intellectual contortions (militant, purity tests, ideology) if their detrimental activities get any pushback.

    • 875765465609068 5 hours ago

      Nice move smashing those stocking frames! It is fun to watch the knitting pattern thieves and their companies go into intellectual contortions (militant, purity tests, ideology) if their detrimental activities get any pushback.