HTTP only is fundamentally disrespectful to your users. It places your needs above theirs. It assumes that your threat model is the same as theirs. There is no excuse for it in 2026.
HTTP is still the best solution for intranet sites... As long as you cannot run your own fully local CA as hassle-free as DHCP, HTTP will never die.
Can't you get certificates by doing DNS challenges and use those certificates internally? If you don't have to be completely airgapped, doing the DNS challenges shouldn't be too hard.
It is my understanding that DNS challenges are discouraged and/or being deprecated because the challenge results are considered less trustworthy than more stringent verification methods. There is also the operational overhead that arises as SSL certificate lifetimes shorten; it is my understanding that a case is now being made for certificate lifetimes shorter than 24 hours.
I don’t know about the DNS challenge being discouraged, do you have something to read up on that? As far as I know it’s the only common way to get a wildcard cert.
And also the lifetime isn’t a problem in the setup I described, the internal server that uses the cert can do the dns challenge so it can get a new cert whenever it wants. It only needs to be able to access the DNS api.
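For what it's worth, the per-renewal work in that setup is small: the only thing the internal host has to compute and publish through the DNS API is a single TXT record. A minimal sketch of the DNS-01 record value as specified in RFC 8555 (the token and thumbprint below are placeholders, not real ACME values):

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    # RFC 8555 section 8.4: the TXT record at _acme-challenge.<name> holds
    # base64url(SHA-256(token "." account-key-thumbprint)), unpadded.
    key_authorization = f"{token}.{account_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# What the internal host publishes via the DNS provider's API, then asks
# the CA to validate before fetching its fresh certificate:
record_name = "_acme-challenge.internal.example.com"
record_value = dns01_txt_value("placeholder-token", "placeholder-thumbprint")
```

In practice an ACME client (certbot, lego, acme.sh, ...) does this for you; the point is that nothing in the flow requires the internal server itself to be reachable from outside.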
I must correct myself: the DNS challenge is indeed being discouraged going forward, but that is because the DNS-01 challenge is being replaced by the DNS-PERSIST-01 challenge, which addresses deficiencies in DNS-01.
The trust and security issues of maintaining intranet resources versus outsourcing to a dedicated professional cloud service provider remain, but they are unrelated to whether the SSL certificates involved are issued through DNS-based verification or not.
DNS challenges are a massive PITA, too. I used them for wildcard certificates but gave up after a couple of years because manually renewing them every three months was super annoying.
Unfortunately it is not easy to automate either, especially if you use multiple domain providers. Not every host has an API, and Namecheap wanted $50 to enable it, if I remember correctly.
You could also manually install CA certificates on every client device, or you can tell users to live with the security warnings shown by browsers...
It is currently not possible to keep your internal network private and still have HTTPS without hacks or problems on standard end user devices.
> It is currently not possible to keep your internal network private and still have HTTPS without hacks or problems on standard end user devices.
Only if you consider transferring the cert from the public server to your internal server a hack. But how would it ever work otherwise? The CA needs to have some publicly accessible way to check your control of the domain, right?
You need a fake DNS entry on your local network for this to work - I would call that a hack.
And what if you aren't running a public webserver like 99% of normal people out there?
> But how would it ever work otherwise? The CA needs to have some publicly accessible way to check your control of the domain, right?
I mean that's exactly the problem: Why do you have to rely on the public CA infrastructure for local devices?
Consider the scenario of a smart wifi bulb in your local network that you want to control with your smartphone.
IMO it would be great to have your home router act as a local CA that can only issue certificates for .local domains and have that trusted by default by user agents. Would make smart home stuff a lot better than the current situation...
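A user agent could even enforce the ".local only" restriction itself, independent of the router behaving correctly; X.509 name constraints (RFC 5280) exist for exactly this, but the core policy is simple enough to sketch. A hedged illustration (the function name and rules are mine, not any existing browser API):

```python
def local_ca_may_vouch_for(hostname: str) -> bool:
    # Policy a user agent could apply before trusting a certificate that
    # chains to the router's CA: the name must sit strictly under .local,
    # so even a compromised router can never impersonate a public site.
    labels = hostname.lower().rstrip(".").split(".")
    return len(labels) >= 2 and labels[-1] == "local" and all(labels)

assert local_ca_may_vouch_for("bulb.living-room.local")
assert not local_ca_may_vouch_for("example.com")
assert not local_ca_may_vouch_for(".local")
```

With a check like this baked into clients, trusting the router's CA would cost nothing outside your own LAN namespace.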
> IMO it would be great to have your home router act as a local CA that can only issue certificates for .local domains and have that trusted by default by user agents. Would make smart home stuff a lot better than the current situation...
How would you talk to the router and make sure the communication is actually with the router and not someone else? The browser/lightbulb comes with trusted CAs preinstalled, but then you would have to install the router's CA cert on every device you add to the network.
In the case of WiFi, you use a password and WPA2?
Sure, if someone knows your WiFi password they could set up an "evil" router close to your house with the same SSID and credentials, or they could break into your house and install LAN wiretaps, but c'mon, if you are this paranoid you probably don't even have a smartphone in the first place.
Do you mean that you don’t need a way to verify the routers identity on the local network because it is already protected by a password?
Firstly, I don’t think that’s true because you add a lot of sketchy and unknown devices to your network over time (guests, streaming stick, computer with preinstalled OS…) so I wouldn’t trust every device in my WiFi.
And also, if you do trust your network, you don’t really need https inside it, right?
Would be interested in more details on how this was built. The title of the page "VibeScan Tuner" seems to suggest this was vibecoded but is this actually crawling through IP space or hitting something like shodan?
I plan to do a write-up on the architecture and journey. It did start as a vibe coding experiment, and quickly evolved into a Shodan-like screenshotting OSINT thing. Agents generate random IP addresses, check for HTTP, and attempt to screenshot. If a screenshot is taken, then it becomes a 'result' submitted to the backend.
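The IP-generation step is easy to sketch with the stdlib, assuming the agents sample uniformly and re-roll anything non-routable (this is an illustration, not the project's actual code):

```python
import ipaddress
import random

def random_public_ipv4(rng=None) -> ipaddress.IPv4Address:
    # Draw 32-bit values uniformly and reject anything that is not
    # globally routable: private ranges, loopback, link-local, multicast,
    # and reserved space all get re-rolled.
    rng = rng or random
    while True:
        addr = ipaddress.IPv4Address(rng.getrandbits(32))
        if addr.is_global and not addr.is_multicast:
            return addr

addr = random_public_ipv4(random.Random(42))
# An agent would then try http://<addr>/ with a short timeout and hand
# anything that answers to a headless browser for the screenshot.
```

The explicit multicast check is there because older Python versions do not exclude 224.0.0.0/4 from `is_global`.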
HTTP is incomparably less fragile than HTTPS, which is why HTTP+HTTPS is such a great solution for websites made by human persons for human persons. Let's be clear: corporate or institutional persons using HTTPS alone is fine and reasonable. But for human use cases HTTP+HTTPS gets you the best of both worlds. No HTTPS cert system ever survives longer than a few years without human input/maintenance. There's just too much changing and too much complexity, from the software of the user to the software of the webserver.
Which is to say, HTTP is not some "ancient" tech like an analog television. It is a modern technology used today doing things that HTTPS can't.
I'd rather have some expired cert than http
I once saw my ISP injecting JavaScript ads into HTTP traffic, and the horror is with me forever.
Agree strongly. An expired cert is better than no cert.
Also would argue maintenance is only as complicated as you make it for yourself. Countless people keep patched, secure, https web servers running with minimal effort. If its somehow effort, introspect some on why you are somehow making so much work for yourself.
That's no use when your automated registrar stops working in 3 years because it went out of business or changed protocols. Let's Encrypt has been an outlier.
Might be a bit of each of us touching different ends of the elephant. To be clear, I am talking about long timespans. Let's Encrypt hasn't even existed for a full decade yet. During that time it has dropped support entirely for the original ACME protocol, and its root certs have expired at least twice (at least the ones I remember causing issues in older software). And that's ignoring the churn in acme/acme2 clients, OS/distro cert store choices, and browser CA issues. Saying that there's no trouble with HTTPS must come from experience on short timescales (i.e., a few years).
HTTP/3 already allows nothing but CA-validated TLS. It won't be long before they no longer let you click through TLS warnings at all.
If human people want things to be on the web for long time periods those things should be served HTTP+HTTPS.
If you can't keep your site's certs working, I don't have much faith you can keep your server working. Maintenance is required in the face of entropy
There is some kind of middle ground here. My first HTML file still renders like it did in Mosaic. The HTTP server I used back then still works today, 35 years later, without maintenance. I do agree that HTTPS is a simple solution, but there is too much cargo cult around it. Honestly, I do not see the point in maintaining everything ever published if you follow sane practices.
EDIT: I have 15-year-old things at work that do not compile; you do have to maintain those, and the biggest problem is cryptography. I am not sure that such unstable tech should ever be part of the application.
Unless I'm misunderstanding your point, your HTTP server from 35 years ago is still working today without any maintenance? Does that mean no security patching and no bugfix updates, or does "no maintenance" mean something else I'm missing? I find it difficult to discuss these topics when comments like this pretend that you can leave a system exposed on the internet for years without any maintenance.
If we're talking about applications that don't actively listen on the internet, that's fine, and I would agree that we should have complete software that just works. But a webserver, unless it's for personal/home use, is on the internet, and I don't see how it could work for 35 years without any update or change.
Static HTML webservers don't really need the constant security patching and bugfixes that dynamic, complex stuff does. They can literally just live forever. The sites themselves are just files, not applications.
I hate to break it to you, but HTTP servers (what is an "HTML server"?) absolutely can have all manner of fun exploits, like RCE.
On the one hand, I agree with you given that state of the world.
On the other hand, that state of the world shouldn't exist. It's incredible to me that it's not illegal.
That's when you connect the VPN...
I thought that was a one-time thing in a third-world country, blown out of proportion into myth status.
Would you mind sharing what ISP it was and what time period this was in?
I’m not sure whether this applies globally, but in Japan, around 2015, some mobile carriers deployed a “traffic optimization” feature that would lossily compress images in transit.
On the platforms of NTT Docomo and KDDI (au), users could opt out of this behavior. However, with SoftBank, it could not be disabled, which led to controversy.
As you might expect, this caused issues—since the image data was modified, the hash values changed. As a result, some game apps detected downloaded image files as corrupted and failed to load them properly.
Needless to say, this was effectively a man-in-the-middle attack, so it did not work over HTTPS.
Within a couple of years, the feature seems to have been quietly discontinued.
There were also concerns that this might violate the secrecy of communications, but at least the government authorities responsible for telecommunications did not take any concrete action against it.
There is a Japanese Wikipedia article about this: https://ja.wikipedia.org/wiki/%E9%80%9A%E4%BF%A1%E3%81%AE%E6...
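The failure mode described above is easy to reproduce: any byte changed in transit changes the digest the app compares against its manifest. A sketch (the byte strings are stand-ins, not real image data):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"\x89PNG\r\n\x1a\n...original image bytes from the CDN..."
recompressed = b"\x89PNG\r\n\x1a\n...same image, lossily re-encoded..."

# The app checks each download against a manifest hash computed from the
# original file; the carrier's re-encode breaks the comparison, so the
# file is treated as corrupted.
assert sha256_hex(original) != sha256_hex(recompressed)
```

Over HTTPS the carrier cannot rewrite the bytes, which is exactly why the "optimization" only affected plain-HTTP downloads.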
This sounds much more realistic/common; an ISP's motivation to save bandwidth costs is much more likely/frequent than its motivation to monetize through ads (on top of the monthly service fees).
Whereas my ISP did not put in ads, they did inject messages, such as notices that maintenance was going to occur, and did things like redirecting failed DNS lookups to their own search.
Also ISPs were monitoring and selling browsing data years ago.
Comcast / Xfinity in the U.S., for example:
https://www.reddit.com/r/technology/comments/9b5ikd/comcastx...
Cox Communications used to do it in California to inject JS into sites. I remember seeing little Cox popup/toast messages in the corner of other sites.
It was some mobile ISP in Russia, maybe 6 or 8 years ago.
This is such a weird framing. HTTPS is HTTP. TLS is at a different layer of the network stack. You may as well say HTTP through a proxy is better or worse than HTTP through a VPN; all of those statements are equally nonsensical.
You are simply arguing that insecure network requests require less work, which is obviously true. TLS did not appear out of nothing; much effort was expended to create it, and there's a reason for that.
My thoughts exactly. By this logic both are fragile because they run over lossy wireless networks.
The composability of TLS/HTTP is really a beautiful thing.
Any fans of retrocomputing will certainly agree. Much of the plain-HTTP internet that's left is there by them and for them.
But, as we learned with the telnet filter going into place, we exist on the network at the pleasure of everyone else. Their concerns must come before ours. The needs of the many outweigh the needs of the few.
That explains why I've been using this to find all the cool stuff :) https://whatsonhttp.com/votes
Agree 100%. HTTP is much more accessible, and HTTPS has more failure modes. When I want to ensure that someone can read my content, I offer both.
Using HTTP does not guarantee your content can be read, since it can be modified in transit. Your content could be replaced entirely and you would never know unless someone reported it to you.
This is true, and is a real failure mode of HTTP.
Where I live, and for people with older devices, this happens much less frequently than the HTTPS failure modes of unsupported browsers.
If you don't care about security, you could just use a browser which ignores invalid certificates.
Invalid certificates are one thing, and you can probably click through that. But maybe your older browser tops out at TLS 1.0 and servers don't offer that anymore (I think PCI certification for credit cards discourages it), or maybe your older browser can't do ECC certs and the server you want to talk to only has an ECC cert.
Or maybe your older server only speaks TLS 1.0 and that's not cool anymore. Or it can only use SHA-1 certs, so it can't get a current cert.
When I can, I like to serve both http and https, serve the favicon over HTTPS, and use HSTS to induce current clients to use https for everything. Finally, a use for the favicon.
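For anyone wanting to copy the trick: the upgrade hint is just one response header on the HTTPS favicon, since browsers ignore HSTS received over plain HTTP. A sketch of building that header (a max-age of one year is a common choice, not a requirement):

```python
def hsts_header(max_age_seconds: int = 31536000,
                include_subdomains: bool = True) -> tuple:
    # Header carried by the HTTPS favicon response; browsers that see it
    # will rewrite future plain-http navigations to this origin as https.
    value = f"max-age={max_age_seconds}"
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

assert hsts_header(60, False) == ("Strict-Transport-Security", "max-age=60")
```

Since nearly every browser fetches /favicon.ico on its own, even a plain-HTTP landing page ends up seeding the HSTS pin.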
Someone with an older browser can update the browser outside of very niche situations. I have little concern for that use case.
If a server can't do TLS 1.2, a protocol from 2008, I question more than anything how it's still stable and unhacked.
Not very useful when most of the pages are default web server pages.
The author should check to see if the HTTP response body contains "nginx" or "apache" and just filter those out. Seems like at least 50% of what I'm seeing.
Also would be nice if there was a hotlink to view the original site directly from the index page.
The search page lets you add multiple exclude filters to the aggregation pipeline. So as you filter common strings, the interesting results bubble to the top.
If you click the image it should take you to an info page on the service.
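In pure-Python terms, the stacked exclude filters amount to substring rejection over the captured page bodies (a sketch of the behavior described, not the site's actual pipeline):

```python
def apply_exclude_filters(results, exclude_substrings):
    # Drop any result whose page body contains one of the excluded
    # strings: default vhost pages ("Welcome to nginx!", Apache's
    # "It works!") fall away first, and rarer banners bubble to the top.
    def keep(result):
        body = result.get("body", "").lower()
        return not any(s.lower() in body for s in exclude_substrings)
    return [r for r in results if keep(r)]

pages = [
    {"ip": "203.0.113.7", "body": "<h1>Welcome to nginx!</h1>"},
    {"ip": "198.51.100.2", "body": "<title>It works!</title>"},
    {"ip": "192.0.2.9", "body": "<title>Plant Monitor v0.3</title>"},
]
survivors = apply_exclude_filters(pages, ["welcome to nginx", "it works"])
assert [r["ip"] for r in survivors] == ["192.0.2.9"]
```

Each filter you add in the UI presumably appends one more clause like this to the backend query.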