Finally encrypted client hello support \o/
Is this something that we can enable "today" or is it going to take 12 years for browsers and servers to support?
Even if the browsers and servers don't support it, you could still enable it because the system is designed to be backward compatible.
Nginx mainline 1.29.x supports it. So once you have that plus a new enough OpenSSL on your system, you're good to go. Likely too late for Ubuntu 26.04; maybe in Debian 14 next year, or of course rolling-release distros / containers.
But on a personal/single-website server, ECH does not really add privacy: adversaries can still observe the IP metadata and check what's hosted there. The real benefits are on huge cloud hosting platforms.
FWIW Nginx 1.30 [1] just released and supports it so most distributions will have support as soon as those responsible for builds and testing builds push it forward.
"Nginx 1.30 incorporates all of the changes from the Nginx 1.29.x mainline branch to provide a lot of new functionality like Multipath TCP (MPTCP)."
"Nginx 1.30 also adds HTTP/2 to backend and Encrypted Client Hello (ECH), sticky sessions support for upstreams, and the default proxy HTTP version being set to HTTP/1.1 with Keep-Alive enabled."
> But, in a personal/single website server, ech does not really add privacy, adversaries can still observe the IP metadata and compare what's hosted there
I don't quite follow. I have dozens of throwaway silly hobby domains. I can use any of them as the outer SNI. How is someone observing the traffic going to know the inner-SNI domain, unless someone builds a massive database of all known inner+outer combinations, which can be changed on a whim? ECH requires DoH, so unless the ISP has tricked the user into using their DoH endpoint, they can't see the HTTPS resource record.
[1] - https://news.ycombinator.com/item?id=47770007
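For reference, the HTTPS resource record discussed here is where the ECH configuration lives. A sketch in RFC 9460 presentation format; the names and the base64 `ech` blob are placeholders, not real values:

```
; Published for the hidden ("inner") site.  A client that fetches this
; record over encrypted DNS learns the ECHConfigList (the ech=... blob)
; it needs to encrypt the real SNI; an on-path observer then sees only
; whatever outer SNI the operator chose.
hobby-site.example.  3600  IN  HTTPS  1  .  alpn="h2,h3" ech="AEb...placeholder...="
```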
CloudFlare has supported it since 2023: https://blog.cloudflare.com/announcing-encrypted-client-hell... Firefox has had it enabled by default since version 119: https://support.mozilla.org/en-US/kb/faq-encrypted-client-he... so you can use it today.
https://tls-ech.dev indicates that Safari doesn't support it, but Chrome does.
That’s likely because iOS/macOS don’t support it enabled-by-default in production yet; there’s an experimental opt-in flag at the OS level, but Safari apparently hasn’t (yet) added a dev feature switch for it.
https://developer.apple.com/documentation/security/sec_proto...
Presumably anyone besides Safari can opt-in to that testing today, but I wouldn’t ship it worldwide and expect nice outcomes until (I suspect) after this fall’s 27 releases. Maybe someone could PR the WebKit team to add that feature flag in the meantime?
TLS (the IETF Working Group, not the protocol family named for it) has long experience with the fact that if you specify how B is compatible with A based on how you specified A, and then ship B, what you specified won't work, because the middleboxes are all cost-optimized: they implement not what you specified but whatever got the sale for the least investment.
So, e.g., they'd work for exactly the way TLS 1.0 was used in the Netscape 4 web browser, which was popular when the middlebox was first marketed; or maybe they cope with exactly the features used in Safari, but since Safari never sets this one bit flag, they reject all connections with that flag set.
What TLS learned is summarized as "have one joint and keep it well oiled", and they invented a technique to provide that oiling for TLS's one working joint: GREASE, Generate Random Extensions And Sustain Extensibility. The idea of GREASE is that if a popular client (say, the Chrome web browser) just insists on uttering random nonsense extensions, then to survive in a world where that happens you must not freak out at extensions you do not understand. If your middlebox firmware freaks out when it sees this, your customers say "This middlebox I bought last week is broken, I want my money back", so you have to spend a few cents more to never do that.
But, since random nonsense is now OK, we can ship a new feature and the middleboxes won't freak out, so long as our feature looks similar enough to GREASE.
ECH applies the same idea: when a participating client connects to a server which, as far as it knows, does not support ECH, it acts exactly as it would for ECH, except that, since it has neither a "real" name to hide nor a key to encrypt that name with, it fills the space where those would go with random gibberish. As a server, you get this ECH extension you don't understand, filled with random gibberish you also don't understand; this seems fine, because you didn't understand any of it (or maybe you've switched it off; either way it's not relevant to you).
But for a middlebox, this ensures it can't tell whether you're doing ECH. So either it rejects every client which could do ECH, which again is how you get a bunch of angry customers, or it accepts such clients and ECH works.
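The "don't freak out at unknown extensions" rule can be sketched in a few lines of Python. This is a toy, not a real TLS parser, but the framing matches how TLS encodes extensions (2-byte type, 2-byte length, body), and the GREASE and ECH code points shown are the real ones (RFC 8701; 0xfe0d for ECH):

```python
import struct

def parse_extensions(blob: bytes) -> dict:
    """Parse a TLS-style extensions block: each entry is a 2-byte
    type, a 2-byte length, then the body.  Types this toy peer does
    not understand (GREASE values, ECH, anything else) are skipped
    rather than rejected -- the survival rule GREASE enforces."""
    known = {0: "server_name", 16: "alpn"}   # the only types we handle
    understood = {}
    i = 0
    while i + 4 <= len(blob):
        ext_type, ext_len = struct.unpack_from("!HH", blob, i)
        i += 4
        body = blob[i:i + ext_len]
        i += ext_len
        if ext_type in known:
            understood[ext_type] = body
        # else: ignore silently; freaking out here is what breaks middleboxes
    return understood

# A fake extensions block: a server_name, a GREASE extension (0x1a1a,
# one of the RFC 8701 reserved values) full of gibberish, and 0xfe0d
# (the ECH code point) carrying opaque bytes it cannot distinguish
# from gibberish.  The parser keeps only what it knows.
blob = (
    struct.pack("!HH", 0x0000, 4) + b"a.io"
    + struct.pack("!HH", 0x1A1A, 3) + b"\xde\xad\xbe"
    + struct.pack("!HH", 0xFE0D, 5) + b"\x00" * 5
)
parsed = parse_extensions(blob)   # -> {0: b"a.io"}
```

The point of the example: the server ends up in the same state whether the unknown bytes were GREASE noise or a real encrypted name, which is exactly why a middlebox can't special-case ECH away.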
Just be aware any reasonable network will block this.
> Just be aware any reasonable network will block this.
Russia blocked it for Cloudflare because the outer SNI was obviously just for ECH, but that won't stop anyone from using generic or throwaway domains as the outer SNI. As for "reasonable", I don't quite follow: only censorious countries or ISPs would do such a thing.
I can foresee firewall vendors adding a category for known outer-SNI domains used for ECH, but at some point that list would become quite cumbersome and may run into the same problems as blocking CDN IP addresses.
Once upon a time, "reasonable networks" blocked ICMP, too.
They were wrong then, of course, and they're still wrong now.
Once upon a time, like today? ICMP is most definitely only allowed situationally through firewalls today.
Why is it "reasonable" to block it?
Well, I may want to have a say in what websites the employees at work access in their browsers. For example.
That’s not a meaningful issue here. Either snoop competently or snoop wire traffic, pick one.
In the snooping-mandatory scenario, either you have a mandatory outbound PAC with an SSL-terminating proxy that refuses CONNECT traffic or allows only what it can root-CA MITM, or you have a self-signed root CA MITM'ing all encrypted connections it recognizes. The former will keep functioning just fine; the latter is likely already having issues with certificate-pinned apps and operating system components, not to mention likely being completely unaware of 443/udp (QUIC), and should be scheduled for replacement by a solution that's actually effective during your next capital budgeting interval.
That’s usually done not on the network side but through the device itself. Think MDM and endpoint management.
A good solution is tackling it on both. At work we have network-level firewalls with separate policies for internal and guest networks, and our managed PCs sync a filter policy as well (though primarily for when those devices are not on our network). The network level is more efficient, easier to manage and troubleshoot, and works on appliances, rogue hardware, and other things that happen not to have client management.
Well, if you have MDM you should be able to just disable ECH.
This is also indeed done on both. Browser policies.
Procrastinators. FTFY.
Eventually these blocks won't be viable when big sites only support ECH. It's a stopgap solution that's delaying the inevitable death of SNI filtering.
This will never happen. Because between enterprise networks and countries with laws, ECH will end up blocked a lot of places.
Big sites care about money more than your privacy, and forcing ECH is bad business.
And sure, kill SNI filtering, most places that block ECH will be happy to require DPI instead, while you're busy shooting yourself in the foot. I don't want to see all of the data you transmit to every web provider over my networks, but if you remove SNI, I really don't have another option.
> Because between enterprise networks
> require DPI
Enterprises own the device that I'm connected to the network with, I don't see how you can get any more invasive than that.
> countries with laws
1) what countries do national-level SNI filtering, and 2) why are you using a hypothetical authoritarian, privacy-invading state actor as a good reason to keep plaintext SNI?
> Big sites care about money
Yes, and you could say that overbearing, antiquated network operators stop them from making more money with things like SNI filtering.
https://www.haproxy.com/blog/state-of-ssl-stacks
According to this, one should not be using v3 at all...
Nice that OpenSSL finally relented and provided an API for developers to use to implement QUIC support - last year, apparently.
For those not familiar: until OpenSSL 3.4.1, if you used OpenSSL and wanted to implement HTTP/3, which uses QUIC as the underlying protocol, you had to use OpenSSL's entire QUIC stack; you couldn't bring your own QUIC implementation and use OpenSSL only for the encryption parts.
QUIC, for those not familiar, is basically "what if we re-implemented TCP's functionality on top of UDP, but could throw out all the old legacy crap". Complicated but interesting, except that if OpenSSL's implementation didn't do what you wanted, or didn't do it well, you either had to put up with it or switch to some other SSL library entirely. That meant that if you were using, e.g., curl built against OpenSSL, then curl also had to use OpenSSL's QUIC implementation, even if better ones were available.
Daniel Stenberg from Curl wrote a great blog post about how bad and dumb that was if anyone is interested. https://daniel.haxx.se/blog/2026/01/17/more-http-3-focus-one...
Mythos is coming for yaaaaa (just kidding).
How is OpenSSL these days? I vaguely remember the big ruckus a while back (was it Heartbleed?) where everyone realized, to their horror, that it was maybe 1 or 2 people trying to maintain OpenSSL, and the OpenBSD people then threw manpower at it to clear up a lot of old outstanding bugs. It seems like it is on firmer/more organized footing these days?
The security side of OpenSSL improved significantly since Heartbleed, which was a galvanizing moment for the maintenance practices of the project. It doesn't hurt that OpenSSL is now one of the most actively researched software security targets on the Internet.
The software quality side of OpenSSL paradoxically probably regressed since Heartbleed: there's a rough consensus that the design of OpenSSL 3.0 was a major step backwards, not least for performance, and more than one large project (but most notably pyca/cryptography) is actively considering moving away from OpenSSL entirely as a result. Again: while security concerns might be an ancillary issue in those potential migrations, the core issue is just that OpenSSL sucks to work with now.
On this topic, there was a great episode of a little-known podcast about Python cryptography and OpenSSL that was really eye opening: https://securitycryptographywhatever.buzzsprout.com/1822302/...
:)
It’s still terrible. There was a brief period immediately after Heartbleed when it was rapidly improving, but the entire OpenSSL 3 effort was a huge disappointment to anyone who cared about performance, complexity, or developer experience (ergonomics). Core operations in OpenSSL 3 are still much, much slower than in OpenSSL 1.1.1.
The HAProxy people wrote a very good blog post on the state of SSL stacks: https://www.haproxy.com/blog/state-of-ssl-stacks And the Python cryptography people wrote an even more damning indictment: https://cryptography.io/en/latest/statements/state-of-openss...
Here are some juicy quotes:
> With OpenSSL 3.0, an important goal was apparently to make the library much more dynamic, with a lot of previously constant elements (e.g., algorithm identifiers, etc.) becoming dynamic and having to be looked up in a list instead of being fixed at compile-time. Since the new design allows anyone to update that list at runtime, locks were placed everywhere when accessing the list to ensure consistency.
> After everything imaginable was done, the performance of OpenSSL 3.x remains highly inferior to that of OpenSSL 1.1.1. The ratio is hard to predict, as it depends heavily on the workload, but losses from 10% to 99% were reported.
> OpenSSL 3 started the process of substantially changing its APIs — it introduced OSSL_PARAM and has been using those for all new API surfaces (including those for post-quantum cryptographic algorithms). In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing. This reduces performance, reduces compile-time verification, increases verbosity, and makes code less readable.
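The OSSL_PARAM calling convention the quote describes can be sketched as an analogy in Python. To be clear, this is not OpenSSL's actual API, just the shape of it: instead of passing arguments directly, the caller builds an array of key/value records terminated by a sentinel (like `OSSL_PARAM_construct_end()` in C), and the callee linearly searches it for the names it cares about. The `Param`/`digest_init` names here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Param:
    key: str
    value: object

END = Param("", None)          # terminator sentinel, like OSSL_PARAM_construct_end()

def find_param(params, key):
    # Linear search the callee must repeat for every name it needs.
    for p in params:
        if p.key == "":        # hit the end sentinel
            return None
        if p.key == key:
            return p.value
    return None

# Old style would be digest_init(md="SHA-256", size=32) -- checked at the
# call site.  In the OSSL_PARAM style everything funnels through one
# generic array, so typos and missing keys surface only at runtime.
def digest_init(params):
    md = find_param(params, "digest")
    size = find_param(params, "size")
    if md is None:
        raise ValueError("missing 'digest' param")
    return (md, size)

ctx = digest_init([Param("digest", "SHA-256"), Param("size", 32), END])
```

The analogy shows why the quote calls out lost compile-time verification: `find_param([Param("digets", "x"), END], "digest")` silently returns `None`, where a misspelled keyword argument would have been a hard error.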
I think one of the main motivators was supporting the new module framework that replaced engines. The FIPS module specifically is OpenSSL's gravy train, and at the time the FIPS certification and compliance mandate effectively required the ability to maintain ABI compatibility of a compiled FIPS module across multiple major OpenSSL releases, so end users could easily upgrade OpenSSL for bug fixes and otherwise stay current. But OpenSSL also didn't want that ability to inhibit evolution of its internal and external APIs and ABIs.
Though, while the binary certification issue nominally remains, there's much more wiggle room today when it comes to compliance and auditing. You can typically maintain compliance when using modules built from updated sources of a previously certified module which are in the pipeline for re-certification. So the ABI dilemma is arguably less onerous today than it was when the OSSL_PARAM architecture took shape. Today, as with Go, you can lean on process, i.e. constant cycling of the implementation through the certification pipeline, more than on technical solutions. The real unforced error was committing to OSSL_PARAM for the public application APIs, letting the backend design choices (flexibility, etc.) bleed through to the frontend. The temptation is understandable, but the ergonomics are horrible. I think the performance problems are less a consequence of OSSL_PARAM per se and more about the architecture of state management between the library and module contexts.
As a complete non-expert:
On the one hand, looks like decent cleanup. (IIRC, engines in particular will not be missed).
On the other hand, breaking compatibility is always a tradeoff, and I still remember 3.x being... not universally loved.
That's why it is version 4.
Compared to OpenSSL 3, this transition has been very smooth. Only the dropping of "engines" was a problem at all, and in Fedora most of those dependencies have been changed.
I just updated to 3.5x to get pq support. Anything that might tempt me to upgrade to 4.0?
The top feature, “Support for Encrypted Client Hello (ECH, RFC 9849)”, is of prime importance to those operating Internet-accessible servers, or clients; hopefully your Postgres server is not one such!
I wonder how hard it is to move from 3.x to 4.0.0 ?
From what I remember hearing, the move from 2 to 3 was hard.
That's because there was no version 2...
Yes there was!
But (thousand-yard stare) it was the version for the FIPS patches to 1.0.2.
Just in time for the suckerpinch video