Maybe people should stop posting shitty LLM-written articles that don't generate any good discussion beyond "I think this was written by an LLM" and we won't have this "problem".
When you encounter these comments or this sentiment, pretend that LLM = Low-effort Long Mumbling. In other words, poor writing.
Detection of "LLM" is a red herring. Quality is what matters. Always has been. Assess comment quality holistically, and you'll be fine.
If "quality" is all that matters and maximizing quality is the goal, and if LLMs can generate higher quality comments more consistently than humans, we should close all user accounts. Don't even have this be a forum anymore. Have LLMs crawl the web, post articles then generate threads discussing them from various simulated points of view. No direct human participation, no Eternal September. Then readers can have their own agents summarize the threads for them.
We can consider this the carcinization of online discourse: everything evolves toward the optimum of LLM summarization.
> if LLMs can generate higher quality comments more consistently than humans
Do you believe this?
No, because my definition of "quality" for comments implicitly includes human intent, which LLMs lack.
But I suspect that a lot of people on HN view these threads only as data, and that for them "quality" exists only within the semantics and structure of the text itself; the human element doesn't matter to them.
My honest opinion is just to accept it, move on, and continue writing, building, and creating stuff that will resonate with at least a bunch of people. Unfortunately, what is karma farming here or on Reddit becomes hateful comments or the like on YouTube, etc., depending on the platform.
The answer has been the same since the days of Moses:
Drown it out with high quality submissions and high quality comments.
add a less severe "Flag as AI" button
That would be the little downwards-facing arrow to the left.
Why can't we just use Flag?
Ignore or downvote or flag [1] depending on your confidence in your judgement, your perception of the severity of its impact on the HN community, your mood, etc.
Just like any other behavior you don’t like.
[1] Logically, upvoting is also an option.
This is LLM
openclaw. Pure AGI.
HN should add some kind of LLM detection. Preferably something that rates how unhinged a comment is.
Smoke me a kipper, I'll be back for breakfast.
> rates how unhinged a comment is
No can do, too many false positives considering the usual demographics.
We already have a thing that rates how unhinged a comment is: the downvote button, or the flag button in extreme cases.
LLM detection is basically witchcraft, though, for all but the most obvious cases.
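To see why, here's a toy sketch of what the non-obvious detectors boil down to: surface features with made-up weights. The word list, thresholds, and weights below are all invented for illustration, and every single feature also fires on careful human prose, which is exactly the false-positive problem.

    import re

    # Words pop-detectors treat as LLM "tells" -- invented list, illustration only.
    TELL_WORDS = {"delve", "tapestry", "furthermore", "moreover", "crucial"}

    def llm_score(comment: str) -> float:
        """Return a 0..1 'probably an LLM' score from surface features alone."""
        words = re.findall(r"[a-z']+", comment.lower())
        if not words:
            return 0.0
        tell_rate = sum(w in TELL_WORDS for w in words) / len(words)
        dash_rate = comment.count("\u2014") / len(comment)  # em dashes per char
        sentences = [s for s in re.split(r"[.!?]+", comment) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        mean = sum(lengths) / len(lengths)
        variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
        uniformity = 1.0 / (1.0 + variance)  # even sentence lengths read as "generated"
        # Weights are made up; each feature also fires on edited human writing.
        return min(1.0, 40 * tell_rate + 150 * dash_rate + 0.5 * uniformity)

    print(llm_score("Furthermore, let us delve into this rich tapestry of ideas."))

That test sentence scores 1.0, but so does plenty of prose from any human who likes em dashes and tidy sentences. Which is the point.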