The security side of OpenSSL has improved significantly since Heartbleed, which was a galvanizing moment for the project's maintenance practices. It doesn't hurt that OpenSSL is now one of the most actively researched software security targets on the Internet.

The software quality side of OpenSSL, paradoxically, has probably regressed since Heartbleed: there's a rough consensus that the design of OpenSSL 3.0 was a major step backwards, not least for performance, and more than one large project (most notably pyca/cryptography) is actively considering moving away from OpenSSL entirely as a result. Again: while security concerns might be an ancillary issue in those potential migrations, the core issue is just that OpenSSL sucks to work with now.

reply
On this topic, there was a great episode of a little-known podcast about Python cryptography and OpenSSL that was really eye opening: https://securitycryptographywhatever.buzzsprout.com/1822302/...

:)

reply
I dunno, they'll let anybody get on the Internet and start a podcast.
reply
> ... the core issue is just that OpenSSL sucks to work with now.

The Node.js working group doesn't seem happy working with OpenSSL, either. There have been indications that Node may move off of it (though I remain sceptical):

  I'd actually like us to consider the possibility of switching entirely to BoringSSL and away from OpenSSL. While BoringSSL does not carry the same Long Term Support guarantees that OpenSSL does, and has a much more constrained set of algorithms/options -- meaning it would absolutely be a breaking change -- the model they follow echoes that approach that v8 takes and we've been able to deal with that just fine.
Update on QUIC, https://github.com/nodejs/node/issues/57281 (2025).
reply
It’s still terrible. There was a brief period immediately after Heartbleed when it was rapidly improving, but the entire OpenSSL 3 effort was a huge disappointment to anyone who cared about performance, complexity, or developer experience (ergonomics). Core operations in OpenSSL 3 are still much, much slower than in OpenSSL 1.1.1.

The HAProxy people wrote a very good blog post on the state of SSL stacks (https://www.haproxy.com/blog/state-of-ssl-stacks), and the Python cryptography people wrote an even more damning indictment (https://cryptography.io/en/latest/statements/state-of-openss...).

Here are some juicy quotes:

> With OpenSSL 3.0, an important goal was apparently to make the library much more dynamic, with a lot of previously constant elements (e.g., algorithm identifiers, etc.) becoming dynamic and having to be looked up in a list instead of being fixed at compile-time. Since the new design allows anyone to update that list at runtime, locks were placed everywhere when accessing the list to ensure consistency.

> After everything imaginable was done, the performance of OpenSSL 3.x remains highly inferior to that of OpenSSL 1.1.1. The ratio is hard to predict, as it depends heavily on the workload, but losses from 10% to 99% were reported.

> OpenSSL 3 started the process of substantially changing its APIs — it introduced OSSL_PARAM and has been using those for all new API surfaces (including those for post-quantum cryptographic algorithms). In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing. This reduces performance, reduces compile-time verification, increases verbosity, and makes code less readable.

reply
Wow, also this:

> The OpenSSL project does not sufficiently prioritize testing. [...] the project was [...] reliant on the community to report regressions experienced during the extended alpha and beta period [...], because their own tests were insufficient to catch unintended real-world breakages. Despite the known gaps in OpenSSL’s test coverage, it’s still common for bug fixes to land without an accompanying regression test.

I don't know anything about these libraries, but this makes their process sound pretty bad.

reply
There are few other options. `ring` is not for production use, WolfSSL lags a bit behind in features, and BoringSSL and AWS-LC are the best we have.
reply

  > In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing. 
Ah yes, the ole' `fn(args: Map<String, Any>)` approach. Highly auditable, and Very Safe.
reply
I think one of the main motivators was supporting the new module framework that replaced engines. The FIPS module specifically is OpenSSL's gravy train, and at the time the FIPS certification and compliance mandate effectively required the ability to maintain ABI compatibility of a compiled FIPS module across multiple major OpenSSL releases, so end users could easily upgrade OpenSSL for bug fixes and otherwise stay current. But OpenSSL also didn't want that ability to inhibit evolution of its internal and external APIs and ABIs.

Though, while the binary certification issue nominally remains, there's much more wiggle room today when it comes to compliance and auditing. You can typically maintain compliance when using modules built from updated sources of a previously certified module that are in the pipeline for re-certification. So the ABI dilemma is arguably less onerous today than it was when the OSSL_PARAM architecture took shape. Today, as with Go, you can lean on process, i.e. constantly cycling the implementation through the certification pipeline, more than on technical solutions. The real unforced error was committing to OSSL_PARAM for the public application APIs, letting backend design choices (flexibility, etc.) bleed through to the frontend. The temptation is understandable, but the ergonomics are horrible. I think the performance problems are less a consequence of OSSL_PARAM per se than of the architecture of state management between the library and module contexts.

reply
Fair, but from the user side it still hurts. Setting up an Ed25519 signing context used to be maybe ten lines. Now you're constructing OSSL_PARAM arrays, looking up providers by string name, and hoping you got the key type right because nothing checks at compile time.
reply
Yeah. Some of the more complex EVP interfaces from before and around the time of the forks had design flaws, and with PQC that problem is only going to grow. Capturing the semantics of complex modes is difficult, and maybe that figured into the motivations. But OSSL_PARAM on the frontend feels more like a punt than a solution, and to maintain API compatibility you still end up with all the same cruft in both the library and the application; it's just more opaque and confusing to figure out which textual parameter names to use and not use, when to refactor, etc. You can't tag a string parameter key with __attribute__((deprecated)). With the module interface decoupled and a faster release cadence, exploring and iterating on more strongly typed and structured EVP interfaces should be easier, I would think. That's what the forks seem to do. There are incompatibilities across BoringSSL, LibreSSL, etc., but also cross-pollination and communication, and over time the interfaces are refined and unified.
reply
The sensible way forward would be dropping the FIPS security theatre entirely and letting it rot in the stupid corner companies dug themselves into, but of course the problem is that FIPS is OpenSSL's main income source...

I really wish the Linux Foundation or some other big OSS player funded a complete replacement for it, then just wrote a shim that translates OpenSSL 1.1-lookalike ABI calls to the new library.

reply