Fun! I wish WebTorrent had caught on more. I've always thought it had a worthy place in the modern P2P conversation.
In 2020, I messed around with a PoC for what hosting and distributing Linux distros could look like using WebTorrent [1]. The project as a whole has a lovely, brilliant design but has stayed mostly stagnant in recent years. Only a couple of WebRTC-enabled torrent trackers have remained active and stable.
1. https://github.com/leoherzog/LinuxExchange
I think the issue has generally been that WebTorrent doesn't work enough like the real thing to do its job properly. There are huge BitTorrent-based streaming media networks out there (illicit, sure, but it's a proven technology). If browsers had real torrent clients, we would be having a very different conversation, IMO.
I don't remember the WebTorrent issue numbers off the top of my head, but there are a number of long-standing issues that seem blocked on WebRTC limitations.
I think we still have the same blocker we had back when WebTorrent first appeared: browsers cannot be real torrent clients. They cannot open connections without some initial routing for discovery, and they cannot open bidirectional unordered connections directly between two browsers.
If we could, say, do peer discovery via Bluetooth and open sockets directly from a browser page, we could in theory have local-first websites running in the browser that make P2P connections straight between browsers.
Could you run some kind of hybrid DHT where part of it was WebRTC and part was plain HTTP(S)/WebSocket?
There are some nodes (desktop clients with UPnP, dedicated servers) that can accept browser connections. Those nodes could help you exchange offers/answers to establish connections with the WebRTC-only ones, and those could facilitate offer/answer exchanges with their peers in turn, as in the sketch below.
It'd be dog-slow compared to the single-UDP-packet-in, single-UDP-packet-out philosophy of the traditional mainline DHT, but I don't see why the idea couldn't work in principle.
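A minimal sketch of that relay step, assuming a hypothetical WebSocket-reachable node at RELAY_URL and an invented message format; ICE candidate relaying, which a real implementation would also need, is omitted for brevity:

```typescript
// Hypothetical hybrid-DHT signaling: a browser uses one WebSocket-reachable
// node as a rendezvous to exchange SDP with a WebRTC-only peer.
const RELAY_URL = "wss://relay.example.com"; // placeholder: a UPnP/desktop node

async function connectViaRelay(targetPeerId: string): Promise<RTCDataChannel> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });
  const channel = pc.createDataChannel("dht");

  const ws = new WebSocket(RELAY_URL);
  await new Promise<void>((resolve) => (ws.onopen = () => resolve()));

  // Hand our offer to the relay, addressed to the WebRTC-only peer.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  ws.send(JSON.stringify({ to: targetPeerId, sdp: pc.localDescription }));

  // The relay forwards the peer's answer back over the same WebSocket.
  ws.onmessage = async (ev) => {
    const msg = JSON.parse(ev.data);
    if (msg.sdp?.type === "answer") await pc.setRemoteDescription(msg.sdp);
  };

  // Resolves once the direct browser-to-browser channel is up.
  return new Promise((resolve) => (channel.onopen = () => resolve(channel)));
}
```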
I think a much bigger problem is content discovery and update distribution. You can't really do decentralized search, because it'd very quickly get Sybil-attacked to death. You'd always need some kind of centralized, trusted content index, though not necessarily one hosted on a centralized server. If you had a reliable way to go from a pubkey to the latest hash signed by that pubkey in a decentralized way, plus e.g. a SQLite extension to fetch pages on demand via WebTorrent, that would get you a long way towards solving the problem.
What you're asking for already exists; it updates through a version counter, and it just works on the mainline DHT, btw.
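For reference, that's the shape of BEP 44 mutable items (which BEP 46 mutable torrents build on): an ed25519-signed record with a sequence number, where the highest validly signed seq wins. A simplified verification sketch using tweetnacl; note the real BEP 44 payload is bencoded, and the encoding here is ad-hoc:

```typescript
import nacl from "tweetnacl";

interface MutableRecord {
  pubkey: Uint8Array;    // ed25519 public key: the stable identity
  seq: number;           // version counter; the highest valid seq wins
  value: Uint8Array;     // e.g. the latest torrent infohash
  signature: Uint8Array; // ed25519 signature over (seq, value)
}

// Simplified signing payload; real BEP 44 signs a bencoded string instead.
function payload(rec: MutableRecord): Uint8Array {
  const seqBytes = new TextEncoder().encode(`seq:${rec.seq}:`);
  const out = new Uint8Array(seqBytes.length + rec.value.length);
  out.set(seqBytes);
  out.set(rec.value, seqBytes.length);
  return out;
}

// Accept a record only if the signature verifies and it's newer than what we hold.
function shouldReplace(current: MutableRecord | null, incoming: MutableRecord): boolean {
  const valid = nacl.sign.detached.verify(payload(incoming), incoming.signature, incoming.pubkey);
  return valid && (current === null || incoming.seq > current.seq);
}
```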
If a tracker could be connected to via WebRTC and had additional STUN functionality, would that suffice? Are there additional WebRTC limitations?
> they cannot open bi-directional unordered connections between two browsers.
Last I checked, DataChannels were bidirectional
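They are, and they can be made unordered and unreliable as well; the real limitation is the out-of-band signaling needed to open one. A minimal sketch of the relevant options in the standard WebRTC API:

```typescript
const pc = new RTCPeerConnection();

// DataChannels are bidirectional by default; these options make delivery
// UDP-like: out-of-order and without retransmission of lost messages.
const channel = pc.createDataChannel("swarm", {
  ordered: false,    // allow out-of-order delivery
  maxRetransmits: 0, // unreliable: never retry lost messages
});

channel.onopen = () => channel.send("have: piece 42"); // either side may send
channel.onmessage = (ev) => console.log("peer says:", ev.data);
```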
Yes, but it's STUN that sucks. If the software ships with a public (on-the-internet) relay/STUN server for connecting the two clients, it won't work if either isn't connected to the internet, even though the clients could still be on the same network and able to reach each other.
That seems like a non-issue for the purposes of this discussion, though, in terms of user uptake. TikTok and Facebook and other websites aren't exactly focused on serving people on the same network.
/? STUN: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
There is a Direct Sockets spec draft that only Chrome implements;
"Direct Sockets API": https://developer.chrome.com/docs/iwa/direct-sockets :
> The Direct Sockets API addresses this limitation by enabling Isolated Web Apps (IWAs) to establish direct TCP and UDP connections without a relay server. With IWAs, thanks to additional security measures—such as strict Content Security Policy (CSP) and cross-origin isolation— this API can be safely exposed.
Though there's UPnP XML, it lacks auth for port-forwarding permissions. There's also IPv6.
Similar: "Breaking the QR Limit: The Discovery of a Serverless WebRTC Protocol – Magarcia" https://news.ycombinator.com/item?id=46829296 re: Quick Share, Wi-Fi Direct, Wi-Fi Aware, BLE Beacons, BSSIDs and the Geolocation API
"If browsers had real torrent clients we would be having a very different conversation imo"
The elinks text-only browser has a "real" torrent client
http://bittorrented.com
Oh wow
Was there ever a web-based Jigdo?
> Enhanced security with DOMPurify integration!
> XSS Protection - All HTML sanitized with DOMPurify
> Malicious Code Removal - Dangerous tags and attributes filtered
> Sandboxed Execution - Sites run in isolated iframe environment
I don't think that really makes sense. You probably just want the iframe sandbox, not to remove all JS. Or, ideally, put the torrent hash in the subdomain to use the same-origin policy.
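A sketch of that approach, with a made-up gateway domain: each infohash becomes its own origin, so the browser's same-origin policy isolates storage and scripts per site while the sandbox stays permissive about JS:

```typescript
// Hypothetical loader: give each torrent-hosted site its own origin by
// putting the infohash in the subdomain (the gateway domain is made up).
function mountSite(infohash: string): HTMLIFrameElement {
  const frame = document.createElement("iframe");
  // A unique origin per infohash; other sites' storage stays out of reach.
  frame.src = `https://${infohash}.peerweb-gateway.example/`;
  // Keep scripts, but block top-level navigation, popups, etc.
  // (allow-same-origin is tolerable here because the content is cross-origin
  // to the embedding page; it just lets the site keep its own storage.)
  frame.sandbox.add("allow-scripts", "allow-same-origin");
  document.body.appendChild(frame);
  return frame;
}
```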
I think one of the values of (what appears to be) AI-generated projects like this is that they can make me aware of underlying technology I might not have heard about - for example WebTorrent: https://webtorrent.io/faq
Pretty cool! Not sure what this offers over WebTorrent itself, but I was happy to learn about its existence.
Every time I try these they never work, including this one.
I’m not sure what the value prop is over just using a torrent client?
Maybe when they’re less buggy they’ll become a thing.
I'm planning to eventually launch an open source platform with the same name (peerweb.com) that I hope will be vastly more usable, with a distributed anti-abuse protocol, automatic asset distribution prioritization for highly-requested files, streaming UGC APIs (e.g. start uploading a video and immediately get a working sharable link before upload completion), proper integration with site URLs (no ugly uuids etc. visible or required in your site URLs), and adjustable latency thresholds to failover to normal CDNs whenever peers take too long to respond.
I put the project on hiatus years ago but I'm starting it back up soon! My project is not vibe coded and has thus far been manually architected with a deep consideration for both user and site owner expectations in the web ecosystem.
If it actually worked, I could certainly see the value prop of not making users download a separate program. Generally, downloading a separate program is a pretty big ask.
This is cool - I actually worked on something similar way back in the day: https://github.com/tom-james-watson/wtp-ext. It avoided the need to have any kind of intermediary website entirely.
The cool thing was it worked at the browser level using experimental libdweb support, though that has unfortunately since been abandoned. You could literally load URLs like wtp://tomjwatson.com/blog directly in your browser.
This is pretty interesting!
I think serving video is a particularly interesting use of WebTorrent. It would be good if you could add this as a front end to basically make sites DDoS-proof: you host a regular site, but with a JS front end that serves the site P2P as traffic grows.
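A minimal sketch of that failover idea using the WebTorrent browser API (the magnet URI and timeout are placeholders, and getBlob is the classic callback API; newer versions expose promises): race the swarm against the origin server, so the origin only bears load when no healthy peers answer in time:

```typescript
import WebTorrent from "webtorrent";

const client = new WebTorrent();

// Try the swarm first; fall back to plain HTTP after a deadline.
function loadAsset(magnetURI: string, httpFallback: string, timeoutMs = 3000): Promise<Blob> {
  const viaSwarm = new Promise<Blob>((resolve, reject) => {
    client.add(magnetURI, (torrent) => {
      const file = torrent.files[0];
      file.getBlob((err, blob) => (err || !blob ? reject(err) : resolve(blob)));
    });
  });
  const viaOrigin = new Promise<Blob>((resolve) =>
    setTimeout(() => fetch(httpFallback).then((r) => r.blob()).then(resolve), timeoutMs)
  );
  return Promise.race([viaSwarm, viaOrigin]);
}
```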
I think it is very difficult (and dangerous to the host) to serve user-uploaded videos at scale, particularly from a moderation standpoint. The problem is even worse if everyone is anonymous. There is a reason YouTube has such a monopoly on personal video hosting. Maybe developments in AI moderation will make it more palatable in the future.
The "host" is the user in this case. Every user that watches the video, shares the video. Given that discovery doesn't appear to be a part of this platform, any links would undoubtedly be shared "peer-to-peer" as well, so if you aren't looking at illegal things and don't have friends sending you illegal things to watch, it's perfectly safe.
There is PeerTube for video content.
I wonder if these colors are a kind of watermark, hardcoded as system instructions. Almost all slopware made using Claude has the same color palette. So much for a random token generator being this consistent.
Yep, and I refuse to use sites that look like this. Lovable-built frontends/landing pages have a similar feel. Instant loss of trust and of any desire to try it out.
It's interesting: AI has a certain style. You can see it in pictures and even text content. It does instantly get my guard up.
That's interesting - do you think that's because it's familiar to you?
Would it be the case for folks who have no idea what Lovable is?
Familiar UI is similar to what Tailwind or Bootstrap offers; do they do something different to keep it fresh?
Average internet users/consumers are likely used to the default Shopify checkout.
It's probably more of a "me" problem, but I'm sure there are plenty of others who share my sentiment. It doesn't really have anything to do with it being familiar; familiar can be good. What I'm talking about is a familiar ugliness and lack of intention.
The Stripe or Shopify checkout is familiar, but it only became familiar because it was well designed and people wanted to keep using it.
Also, when it's obvious someone used an LLM, it bleeds into my overall opinion of the product, whether the product is good or not. I assume less effort was put into the project, which is probably a fair assumption.
https://en.wikipedia.org/wiki/Mode_collapse
Ask any modern (post-GPT-2) LLM for a random color/name/city repeatedly, a few dozen times, and you'll see it's not that random. You can influence this with a prompt, obviously, but if the prompt stays the same each time, the output is always very similar despite thousands of valid alternatives existing. That's the case for any vibe-coded thing that doesn't specify a color palette, in particular.
This effect is largely responsible for slop (as in annoying stereotypes). It's fixable in principle, but there's very little research on it, and I don't see the big AI shops caring enough.
Emojis on every line are an AI tell. The times I do use AI (shhhh...) I always remove them and tweak the language a bit.
Before LLMs became big, I used emojis in my PRs and merge requests for fun and to break up the monotony a bit. Now I avoid them, lest I be accused of being a bot.
Isn't it mostly ChatGPT that does that?
Grok almost never uses emojis.
No JavaScript
https://github.com/Omodaka9375/peerweb
https://github.com/Omodaka9375/peerweb/releases/expanded_ass...
If the address is a hash, perhaps it could contain a public key.
Cool. Some people complained about broken demos; I uploaded the mdwiki.info [1] website unaltered and it seems to work fine [0]. MDwiki is a single .html file that fetches custom markdown via AJAX, relative to the HTML file, and renders it via JavaScript.
[0]: https://peerweb.lol/?orc=b549f37bb4519d1abd2952483610b8078e6...
[1]: https://dynalon.github.io/mdwiki/
Why is it called MDwiki? It's clearly not a wiki.
Sure, in a sense, but “wiki” actually just means “quick”.
I can't imagine that Peerweb has much in the way of stopping certain types of material from being uploaded.
Smaller sites likely have a smaller footprint.
You can't stop someone from verbally describing certain objectionable material; therefore we should regulate the medium through which sound travels and suck up all the oxygen on the planet. It's the only way to save the children.
GitHub: https://github.com/omodaka9375/peerweb
Thanks! We'll put that link in the toptext.
Love this. I've been working on something similar for months now:
https://metaversejs.github.io/peercompute/
It's a GPGPU, decentralized, heterogeneous HPC P2P compute platform that runs in the browser.
Nice - I clicked on the first demo and got stuck at connecting with peers.
I like the idea though.
This is probably going to be taken down, like my site that used WebTorrent was.
dropclickpaste.com is for sale. kruhft.at.gmail.com
What do you all think of the chances that we have decentralized AI infrastructure like this at some point?
Useless if it takes > 5 sec. to load a page
You never lived through the '90s.
lol.
Not only did it take > 5 seconds to load a page, images were progressively loaded as fast as two at a time over the next minute or so - if there were no errors during transfer!
OT: Can someone vibe-code Geocities back to life?
Check out neocities.org
You made my life. Thank you, lifelong internet friend.
That would take forever. If you can get the domain I'll hand code it in perl.
<marquee><blink>Neat!!</blink></marquee>
give me the tokens.
Similar project I vibe coded a few weeks ago: "Gnutella/Limewire but WebRTC".
https://github.com/RickCarlino/hazelhop
It works, though probably needs some cleanup and security review before being used seriously (thus no running public instance).
Somebody has to revive Nullsoft WASTE p2p from 2003 tho
I wish stuff like this was more just double-click, agree, and use. They always make it complicated, to the point where you're spending time trying to figure out whether you should continue spending more time on it.
I tried this; the "Functionality test page" is stuck on "Loading peer web site... connecting to peers". I can't load any website from this.
https://imgur.com/gallery/loaidng-peerweb-site-uICLGhK
Yes, none work for me. They either don't have peers, or the few peers there are sit on a very slow network.
None of the demo sites work for me.
Probably needs more testing and debugging.
Reimagined with what's possible in 2026, this could kick off a new kind of GeoCities.
Good, important idea. Unfortunately a bad, low-effort, vibe-coded execution.
Still a shipped idea, driven by someone. The author has some other interesting ideas.
Nice idea. Shame absolutely everything about the website screams AI slop.
I feel like if it were combined with federated caching servers, it would actually work. Then you would have persistence, and the P2P part would help take load off popular content. There are now P2P databases that seem to operate this way, combining the best of both worlds.
I don't get it: I upload my files to your site, then I send my friends links to your site? How is this not a single point of failure?
[sorry for the weird timestamps - the OP was submitted a while ago and I just re-upped it.]
Did the test sites work for you when you tried it? Because none worked for me, nor for at least two other commenters here.
https://news.ycombinator.com/item?id=46830158
https://news.ycombinator.com/item?id=46830183
IPFS [1] unfortunately requires a gateway (whether remote or running locally). If you can use content identifiers that are supported by web primitives, you get the distributed nature without the IPFS scaffolding. Content is versioned by hash, although I haven't looked to see whether mutable torrents [2][3] are used in this implementation. Searching distributed hash tables for torrent metadata, cryptographically signed by the publisher, remains a requirement IMHO.
Bittorrent, in my experience, "just works," whether you're relying on a torrent server or a magnet link to join a swarm and retrieve data. So, this is an interesting experiment in the IPFS, torrent, filecoin distributed content space.
[1] https://ipfs.tech/
[2] https://news.ycombinator.com/item?id=29920271
[3] https://www.bittorrent.org/beps/bep_0046.html
You don't hear much about IPFS these days, but I remember one big problem with it was illegal content and how to deal with it.
This isn't my site, nor do I have any opinions on the implementation here. I do however find the idea of serving web pages via torrent interesting.
P2P storage, as in torrents or IPFS or whatever, is the part we've kinda solved already. Serving/searching/addressing without (centralized) DNS is still missing for an (urgently needed) censorship-resistant P2P internet. Unfortunately, this guy just uses some buzzwords while offering nothing new; why would I share links to that site instead of sharing torrent magnet links?
Thinking about this a little bit... could we use a blockchain ledger as an authoritative source for DNS records?
Users could publish their DNS records plus a public key to the append-only blockchain, signed with their private key.
Use a torrent file to connect to an initial tracker and download the blockchain.
Once the blockchain is downloaded, every computer would have a full copy of the DNS database and could use that for discoverability; a sketch of the record shape follows below.
I have no experience with blockchains or building trackers, so maybe this is a dumb idea.
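A hypothetical sketch of how replaying such a ledger could work; the entry shape, field names, and first-valid-claim rule are all invented for illustration:

```typescript
import nacl from "tweetnacl";

// Hypothetical ledger entry: a name claim or update, signed by the owner's key.
interface DnsEntry {
  name: string;          // e.g. "examplesite.p2p"
  records: string[];     // e.g. ["A 203.0.113.7", "TXT infohash=..."]
  owner: Uint8Array;     // ed25519 pubkey; the first valid claimant owns the name
  signature: Uint8Array; // ed25519 signature over (name + records)
}

const sameKey = (a: Uint8Array, b: Uint8Array) =>
  a.length === b.length && a.every((byte, i) => byte === b[i]);

// Replay the append-only log: the first valid signature claims a name,
// and later updates are accepted only when signed by the same key.
function buildNameTable(log: DnsEntry[]): Map<string, DnsEntry> {
  const table = new Map<string, DnsEntry>();
  const enc = new TextEncoder();
  for (const entry of log) {
    const msg = enc.encode(entry.name + "\n" + entry.records.join("\n"));
    if (!nacl.sign.detached.verify(msg, entry.signature, entry.owner)) continue;
    const current = table.get(entry.name);
    if (!current || sameKey(current.owner, entry.owner)) table.set(entry.name, entry);
  }
  return table;
}
```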
It's been tried/done, but it attracted the same audience of investors looking to make a quick buck rather than to actually make it work.
From what I've seen, you need some minimum percentage of make-it-happen-ers among those interested in a project.
It seems the guy running the extension just left, with minimal effect on the value.
https://addons.mozilla.org/en-US/firefox/addon/b-dns/
https://www.coinbase.com/en-nl/price/namecoin
Look into IPFS and ENS.
This is a great point.
One issue I've had with IPFS is that there's nothing baked into the protocol to maintain peer health, which really limits the ability to keep the swarm connected and healthy.
I used to add web seeds, but clients seem to love just downloading from there rather than from my conventional seeding.
Some new ideas are needed in this space.
You make a good point.