
> Modern implementations must support a range of TLS protocol versions (from legacy TLS 1.0 to current TLS 1.3)

This statement is strange, considering that "modern" security standards either nudge you toward (or outright demand) deprecating anything that isn't v1.2 or v1.3.

If the implementation is "modern", why would I allow 1.0?

This seems like an HAProxy problem. They ought to maintain support for geriatric TLS versions on a dedicated release branch tied to a support model that nudges clients into updating by increasing the fees for maintaining that system. Not doing so makes the vendor part of the reason adoption of 1.3 is slower than it could otherwise be.

It would have been cool to see AWS's s2n-tls (or s2n-quic https://github.com/aws/s2n-quic) included in their benchmark.

One of my all-time favorite episodes from the SCW podcast goes into the design decisions of s2n:

The feeling's mutual: mTLS with Colm MacCárthaigh https://securitycryptographywhatever.com/2021/12/29/the-feel...

From AWS: https://aws.amazon.com/security/opensource/cryptography/

> "In 2015, AWS introduced s2n-tls, a fast open source implementation of the TLS protocol. The name "s2n", or "signal to noise," refers to the way encryption masks meaningful signals behind a facade of seemingly random noise. Since then, AWS has launched several other open source cryptographic libraries, including Amazon Corretto Crypto Provider (ACCP) and AWS Libcrypto (AWS-LC). AWS believes that open source benefits everyone, and we are committed to expanding our cryptographic and transport libraries to meet the evolving security needs of our customers."

Here is a pdf that provides some performance results for s2n (sadly not s2n-quic):

"Performance Analysis of SSL/TLS Crypto Libraries: Based on Operating Platform" https://bhu.ac.in/research_pub/jsr/Volumes/JSR_66_02_2022/12...



> If the implementation is "modern", why would I allow 1.0?

Because there's a distinction between "using" (especially by default) and "implementing".

The real world has millions (billions?) of devices that don't receive updates and yet still need to be talked to, luckily in most cases by a small set of communication partners. Would you rather have even the "modern" side of that conversation be forced to use some ancient SSL library? I'd rather have modern software, even if the other endpoint forces me onto an older protocol version. Just disable it by default.

And it's not like TLS 1.0 and 1.1 are somehow worse than cleartext communication. They're still encrypted transport protocols that take significant effort to break. The fact that you shouldn't use them if at all possible doesn't mean you can't use them when nothing else works.
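To make the "modern software, old protocols disabled by default" idea concrete, here's a sketch using Python's ssl module (the version choices are illustrative; whether a TLS 1.0 handshake actually succeeds also depends on how the underlying OpenSSL was built):

```python
import ssl

# Modern default: refuse anything below TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Explicit opt-in for that one legacy endpoint that only speaks TLS 1.0.
# Note: old protocol versions are often compiled out of OpenSSL or blocked
# by its security level, so the API call succeeding does not guarantee a
# working handshake.
legacy_ctx = ssl.create_default_context()
legacy_ctx.minimum_version = ssl.TLSVersion.TLSv1
```

The point is that the modern library stays fully patched either way; only the one connection that needs the legacy protocol opts into it.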


Exposing TLS 1.0 leaves your connections vulnerable to BEAST. Requiring TLS 1.2 deprecates clients older than what, Android 4.4.2 and Safari 9? Maybe for exceptional cases like IoT crapware and fifteen-year-old smartphones you might still need 1.1? I don't see why you'd want to take on the additional work and risk otherwise. In practice TLS 1.2 has been available for long enough that it should be the bare minimum at this point.

If I were to implement a TLS server today, I'd start at 1.2, and not bother with anything older. All of the edge cases, ciphers, protocols, config files, and regression tests are wasted time and effort.
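As a sketch of what "start at 1.2 and nothing older" could look like with Python's ssl module (the cert/key paths are placeholders, not a definitive implementation):

```python
import ssl

# Server context that negotiates TLS 1.2 or newer; nothing older is offered.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Placeholder paths; load your real certificate chain here.
# server_ctx.load_cert_chain("server.crt", "server.key")
```

One line of policy replaces all of the legacy-cipher and downgrade edge cases you'd otherwise have to test.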


BEAST is, AFAIR, mitigated by RC4. RC4 is vulnerable too, but an attack on it requires more traffic than many clients ever send. Everything is a tradeoff, and denying service to old clients is sometimes worse than introducing a small risk of TLS failing to stop a MitM.


> Exposing TLS 1.0 leaves your connections vulnerable to BEAST.

So?

> Requiring TLS 1.2 deprecates clients older than what, Android 4.4.2 and Safari 9? Maybe for exceptional cases like IoT crapware and fifteen-year-old smartphones you might still need 1.1?

You're underestimating the amount of "IoT crapware" out there. And industrial control systems. And other early internet-ified infrastructure.

Even bringing up Android and Safari hints that you're not thinking in the same direction I am. I'm concerned about RTEMS, FreeRTOS, Zephyr, and oooooold versions of mbedTLS or wolfSSL.

These systems were built using "stable" versions. What do you think was stable 10-15 years ago? That's 20-year-old software. I'm happy if it's TLS and not SSL, my dear friend.


> And it's not like TLS 1.0 and 1.1 are somehow worse than cleartext communication.

In reality humans can't actually do this nuance you're imagining, and so what happens is you're asked "is this secure?" and you say "Yes" meaning "Well it's not cleartext" and then it gets ripped wide open.

HTTP in particular is like a dream protocol for cryptanalysis. If in 1990 you told researchers to imagine a protocol where clients will execute arbitrary numbers of requests to any server under somebody else's control (Javascript) and where values you control are concatenated with secrets you want to steal (Cookies) they'd say that's nice for writing examples, but nobody would deploy such an amateur design in the real world. They would be dead wrong.

But eh, we wrote an RFC telling you not to use these long obsolete protocol versions, and you're going to do it anyway, so, whatever.


> In reality humans can't actually do this nuance […]

Luckily the cases that need this aren't normally about a wide user base, rather they only concern a bunch of developers and admins. Which is why I pointed out the default-off nature of this.

> But eh, we wrote an RFC telling you not to use these long obsolete protocol versions, and you're going to do it anyway, so, whatever.

You're losing your audience with unnecessary hostility. Your post would've been much more effective if you'd simply omitted the last paragraph.


Just to be clear, we don't care at all about performance of 1.0. The tests resulting in the pretty telling graphs were done in 1.3 only, as that's what users care about.


AFAIK HAProxy does not charge its users increased fees for legacy TLS.

You would be shocked how much legacy software is out there that still requires TLS 1.0. Not saying that's a good thing, just a reality…


> This seems like an HAProxy problem. They ought to maintain support for geriatric TLS versions on a dedicated release branch tied to a support model that nudges clients into updating by increasing the fees for maintaining that system. Not doing so makes the vendor part of the reason adoption of 1.3 is slower than it could otherwise be.

If I understand what you're suggesting, it's that HAProxy should have their current public releases support only TLS 1.2 and 1.3, plus a paid release that supports TLS 1.0-1.3, and that this would encourage adoption of 1.3?

I would expect those users who have a requirement for TLS 1.0 to stay on an old, free public release that supports TLS 1.0-1.2 in that case. If upgrading to support 1.3 means dropping a requirement or paying money, who would do it? How does that increase adoption versus making 1.3 available alongside all the other versions in the free release? Some people might reevaluate their requirements given those choices, but if anything that pushes abandonment of TLS 1.0 more than adoption of TLS 1.3.

I no longer have to support this kind of thing, but when you require dropping the old thing at the same time as supporting the new thing, you're forcing people to choose, and unless the choice is very clear, a large group will pick the old thing. IMHO, the differences between TLS 1.0, 1.1, and 1.2 aren't so big that you can claim it's too hard to support them all, and dropping server-side support for 1.0 and 1.1 doesn't gain much security. 1.2 to 1.3 is a bigger change; if you wanted to support only 1.3, that's an argument worth having, but I don't think it's a realistic position for a general-purpose proxy at this point in time (it would certainly be a realistic configuration option, though).
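If you're weighing which versions to keep enabled, it's easy to check what a given server will actually negotiate. A rough probe in Python (host and port are whatever you're testing; certificate verification is deliberately off because we only care whether the handshake completes at that version):

```python
import socket
import ssl

def probe(host: str, version: ssl.TLSVersion, port: int = 443) -> bool:
    """Return True if the server completes a handshake at exactly `version`."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False          # we're probing, not authenticating
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = version       # pin both ends of the range so the
    ctx.maximum_version = version       # server can't negotiate another version
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```

Running this across TLSVersion.TLSv1 through TLSv1_3 against your own endpoints gives you the actual exposure surface before you argue about policy.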


Actually this is a problem for anyone that:

- needs to rely on LTS versions
- runs multi-threaded software (like HAProxy)
- has real performance and scalability needs.

Note, regarding #2 above, that there are other LB/RP projects that don't have a problem here because they chose to be single-threaded, which means their performance is not greatly impacted.

HAProxy is incredibly performant because the project chooses to prioritize performance. And as an open source project, we should applaud the team's efforts to provide the best product possible rather than pushing everything of value into the commercial offering.

That's pretty rare these days.


HAProxy used to support multiprocess and multithread. When performance needs exceed a single cpu, you do need to do one or both of those. But a lot of HAProxy users need to share state between workers and that's a lot easier in the threaded environment.

When I had a need for HAProxy at high throughput, I ran multiprocess, and it was tricky to make sure the processes didn't try to use the same outbound addresses, among other challenges I had to address to reach the connection numbers I thought were reasonable. I can understand why they chose to go threads-only in 2.5, though.

If I were doing the same thing today, hopefully enough has changed in my chosen OS that threads would work for me; but if not, it shouldn't be too hard to spawn the right number of haproxy instances with individual configs to get what I need.



