Discussion
capitol_: Finally encrypted client hello support \o/
bombcar: Is this something that we can enable "today" or is it going to take 12 years for browsers and servers to support?
arcfour: CloudFlare has supported it since 2023: https://blog.cloudflare.com/announcing-encrypted-client-hell... Firefox has had it enabled by default since version 119: https://support.mozilla.org/en-US/kb/faq-encrypted-client-he... so you can use it today.
caycep: How is OpenSSL these days? I vaguely remember the big ruckus a while back (was it Heartbleed?) where everyone, to their horror, realized it was maybe 1 or 2 people trying to maintain OpenSSL, and the OpenBSD people then threw manpower at it to clear up a lot of old outstanding bugs. It seems like it's on firmer/more organized footing these days?
jmclnx: I wonder how hard it is to move from 3.x to 4.0.0? From what I remember hearing, the move from 2 to 3 was hard.
georgthegreat: That's because there was no version 2...
some_furry: Yes there was! But (*thousand yard stare*) it was the version for the FIPS patches to 1.0.2.
bensyverson: I just updated to 3.5.x to get post-quantum support. Anything that might tempt me to upgrade to 4.0?
ocdtrekkie: Just be aware any reasonable network will block this.
quantummagic: Why is it "reasonable" to block it?
vman81: Well, I may want to have a say in what websites the employees at work access in their browsers. For example.
kccqzy: It’s still terrible. There was a brief period immediately after Heartbleed when it was rapidly improving, but OpenSSL 3 as a whole was a huge disappointment to anyone who cared about performance, complexity, and developer experience (ergonomics). Core operations in OpenSSL 3 are still much, much slower than in OpenSSL 1.1.1.

The HAProxy people wrote a very good blog post on the state of SSL stacks: https://www.haproxy.com/blog/state-of-ssl-stacks And the Python cryptography people wrote an even more damning indictment: https://cryptography.io/en/latest/statements/state-of-openss...

Here are some juicy quotes:

> With OpenSSL 3.0, an important goal was apparently to make the library much more dynamic, with a lot of previously constant elements (e.g., algorithm identifiers, etc.) becoming dynamic and having to be looked up in a list instead of being fixed at compile-time. Since the new design allows anyone to update that list at runtime, locks were placed everywhere when accessing the list to ensure consistency.

> After everything imaginable was done, the performance of OpenSSL 3.x remains highly inferior to that of OpenSSL 1.1.1. The ratio is hard to predict, as it depends heavily on the workload, but losses from 10% to 99% were reported.

> OpenSSL 3 started the process of substantially changing its APIs — it introduced OSSL_PARAM and has been using those for all new API surfaces (including those for post-quantum cryptographic algorithms). In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing. This reduces performance, reduces compile-time verification, increases verbosity, and makes code less readable.
gavinray: > In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing.

Ah yes, the ol' `fn(args: Map<String, Any>)` approach. Highly auditable, and Very Safe.
wahern: I think one of the main motivators was supporting the new module framework that replaced engines. The FIPS module specifically is OpenSSL's gravy train, and at the time, FIPS certification and compliance mandates effectively required maintaining ABI compatibility of a compiled FIPS module across multiple major OpenSSL releases, so you could easily upgrade OpenSSL for bug fixes and otherwise stay current. But OpenSSL also didn't want that requirement to inhibit evolution of its internal and external APIs and ABIs. While the binary certification issue nominally remains, there's much more wiggle room today when it comes to compliance and auditing: you can often maintain compliance using a module built from updated source of a previously certified module, one that is in the pipeline for re-certification in its updated form. So the ABI compatibility dilemma is arguably less onerous today than it was when the OSSL_PARAM architecture took shape.
hypeatei: Procrastinators. FTFY.

Eventually these blocks won't be viable, when big sites only support ECH. It's a stopgap solution that's delaying the inevitable death of SNI filtering.
ocdtrekkie: This will never happen, because between enterprise networks and countries with laws, ECH will end up blocked in a lot of places. Big sites care about money more than your privacy, and forcing ECH is bad business.

And sure, kill SNI filtering: most places that block ECH will be happy to require DPI instead, while you're busy shooting yourself in the foot. I don't want to see all of the data you transmit to every web provider over my networks, but if you remove SNI, I really don't have another option.