My mind goes to simple solutions like established communities having a $1 entry fee. For privacy you could maybe use a privacy-focused cryptocurrency, but that's a decent amount of friction for average folks with the current UX.
Another interesting idea that comes to mind: every post/comment would require the user to physically use the fingerprint scanner on their device, which plenty of devices already have. As long as it can't be spoofed it works, but I'm not sure about the details of reliably securing that.
It would be some friction but I feel like it would be fine?
A lot of devices have fingerprint scanners and Face ID, but it isn't used by everybody.
I haven't used it since 2017.
The biggest signal I have noticed over time is consistency, not just one good post. Accounts that participate normally for weeks build a kind of trust naturally. Maybe weighting activity history more than identity verification could help without hurting anonymity.
Creates echo chambers, karma-whoring 'power' accounts, rewards ego-posting, and generally makes the experience about who says what, not what is said. Worsens the problem.
Echo chambers exist no matter what & "who says what" is an essential aspect of determining transferable credibility.
Without transferable credibility, any ratings system simply becomes a question of which side spams the most.
We don't solve it. What happens is the people who don't like it will eventually leave, and everyone else will normalize it because this is a tech forum and within tech AI has already won. There's no solution that allows humans to post which doesn't allow humans to post through an LLM, and as LLMs mature, all of the "tells" people think they have that can distinguish them from normal human speech will vanish.
I predict that within a year most comments on HN will be run through LLMs, and that someone will create a service specifically for doing so (not just on HN, but on multiple platforms). Entirely vibe-coded, of course. The mods won't like it; they've been very clear that LLM-generated comments aren't welcome. Unfortunately I don't think they can stop it any more than King Canute could stop the tide.
Read this comment and use the script in the linked subject:
https://news.ycombinator.com/item?id=47203918
One issue I keep noticing is that most anti-bot systems optimize for blocking instead of increasing friction progressively.
Rate limits tied to behavioral patterns rather than identity seem to work better — especially interaction timing, navigation flow, or session consistency.
We experimented with something similar while building HiveHQ and found bots usually fail when systems require small contextual actions humans do naturally.
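The escalation idea above can be sketched as a sliding-window rate limiter that ramps up friction instead of hard-blocking. This is a minimal illustration, not anyone's production system: the window size, thresholds, and the action names ("delay", "challenge", "block") are all hypothetical.

```python
import time
from collections import deque


class ProgressiveFriction:
    """Escalate friction as a session exceeds a normal action rate.

    All thresholds below are hypothetical illustrations, not tuned values.
    """

    def __init__(self, window_seconds=60, max_actions=10):
        self.window = window_seconds
        self.max_actions = max_actions
        self.history = {}  # session_id -> deque of event timestamps

    def check(self, session_id, now=None):
        now = time.monotonic() if now is None else now
        events = self.history.setdefault(session_id, deque())
        # Drop events that fell outside the sliding window.
        while events and now - events[0] > self.window:
            events.popleft()
        events.append(now)
        overshoot = len(events) - self.max_actions
        if overshoot <= 0:
            return "allow"      # normal human pace
        elif overshoot <= 5:
            return "delay"      # add a small response delay
        elif overshoot <= 15:
            return "challenge"  # require a small contextual action
        return "block"          # persistent automation
```

The point is that a human who briefly bursts (opening ten tabs at once) only sees a delay or a lightweight challenge, while sustained machine-rate activity eventually gets blocked.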
So... use advanced pattern matching to determine human patterns & reject outliers?
Interaction timing is like rate limiting, but more granular
Navigation flow is basically requiring bots to use a headless browser instead of APIs.
What does session consistency mean in this context? Restricting to a limited number of interests & activity times?
Bot flooding could be solved by raising the cost of automation, not by removing anonymity. Techniques like behavioral detection, rate limiting, proof-of-work, reputation systems, and AI-based anomaly detection can filter bots without requiring real-world identity. The goal isn't to know who you are; it's to know whether you're human.
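Of the techniques listed, proof-of-work is the easiest to show concretely. A hashcash-style sketch: the client burns CPU to find a nonce, the server verifies it cheaply. The difficulty value here is a hypothetical knob, not a recommendation; negligible per comment for a human, it adds up fast for a bot posting thousands of times.

```python
import hashlib
import itertools


def find_pow(payload: str, difficulty_bits: int = 16) -> int:
    """Find a nonce so sha256(payload:nonce) has `difficulty_bits`
    leading zero bits. Expensive to find, cheap to verify."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{payload}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce


def verify_pow(payload: str, nonce: int, difficulty_bits: int = 16) -> bool:
    """One hash to check what took the client ~2**difficulty_bits hashes."""
    digest = hashlib.sha256(f"{payload}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

In practice the server would bind the payload to the comment text plus a server-issued timestamp or token, so a solved puzzle can't be reused across posts.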
One of my least favorite patterns online: sites that decide that I’m a bot because I open a whole bunch of tabs in the space of 15 seconds with the products I want to evaluate or articles I want to read.
Haven't noticed any negative changes.
OP, you're on the right track.
The question you need to ask yourself is "What's the end game?"
What happens when users' feeds are full of users that they already know?
You think they'll be satisfied with that?
Make spamming illegal, give severe punishments, and enforce the law.
There is no way to solve it without going to tribalism.
Bots and AI right now are as good as the "average" Joe.
All the places that can shift real people's perception of products, politics, or any form of power are being, and will keep being, flooded with bots.
The reason for "ID" on the internet is not the children. It's that the bots are so good that ID is needed to filter what is a bot and what isn't, to avoid the dead internet.
The powers that be NEED to sway perception and narrative to their liking.
Think about the children! Epstein list, Patriot Act, etc.