
How many times does this have to happen before we start rejecting components written in unsafe languages?


It has started, but it's happening unevenly across different users. Lots of TLS servers (in reverse proxies, but also in backend apps and even Apache httpd[1]) are in memory-safe languages.

For example, Go, while not totally memory safe, has shipped its Go-implemented crypto/tls library for a long time. (It has also had some crypto correctness bugs, reminding us that memory safety is "necessary but not sufficient" for a TLS implementation.)

[1] https://www.memorysafety.org/blog/memory-safe-httpd/


Speaking of Go, it looks like they will be patching this as well.

https://groups.google.com/g/golang-announce/c/dRtDK7WS78g


Yep, this is interesting. I wonder what we can deduce about the nature of the bug from the fact that two separate implementations, one of them mostly memory safe, are impacted. (Of course, Go didn't announce that it's about the same thing, so it might be random, or it might be security research that found different bugs.)

Might it be a crypto bug, or a logic bug (e.g. in x.509)? Is there code that's used by both OpenSSL and Go (e.g. assembly implementations of algorithms that were both imported or modeled after a reference)?


In the linked article, GlobalSign says they don't know what exactly the vulnerability is. I'd imagine the public root CAs would be informed if this were an x.509-related bug. But then again, GlobalSign is a pretty shitty CA to begin with, so I wouldn't be surprised if they were intentionally left uninformed.


I'm not sure why CAs would be invited to the embargo; they're in the business of signing certs, and while they do process untrusted certs, so do zillions of other cert-using folks.


Just speculation: for an x.509/web-of-trust-related vulnerability, I'd expect the CAs to be a prominent target. There are hundreds of them, and I'm pretty sure at least a few of them use OpenSSL somewhere in their certificate issuing process. Just to avoid DigiNotar-like fiascos of revoking certificates en masse, it probably makes sense to give CAs a head start.


Filippo Valsorda (FiloSottile), the maintainer of Go's cryptography libraries, said on Go's Slack channel that the patch is unrelated.

See screenshot of the slack conversation: https://paste.pics/f5622033ae711b36e0bbcda393a67866


Well, Go isn't really a great language for safety either. Memory safety, maybe, but not safety in general. Rust or Zig do much better here.


They don't have GC, so they either make programs difficult to write (Rust), which hinders delivering secure replacements, or have use-after-free security problems (Zig) [1].

Use a GC when you can, it's the biggest programming productivity and quality improvement in PLT of the last 60+ years.

[1] Though I know Zig has some interesting mitigations, some used and some under research: https://news.ycombinator.com/item?id=31853964 https://lobste.rs/s/v5y4jb/how_safe_is_zig#c_vddk9j


Rust doesn't have GC, but it has very good automatic memory management. GC or automatic memory management doesn't make programs immune to buffer overflows, which are the most common security vulnerability these days, while use-after-free is in 4th place.


What do you mean by automatic memory management here?

(I misspoke a bit with "Rust doesn't have GC": it does have opt-in basic GC in the form of ref counting, but it's not used much, because a headline feature of Rust is code without GC, and I guess libs with interfaces requiring GC would be considered uncool.)


Sure, I agree - I don't think what you said contradicts my point.


Modern automatic memory management techniques like RAII and ARC largely render GC obsolete.


Reference counting is hardly modern, and is a GC algorithm.


I'm not a fan of Go, but it is memory safe (with a very minor exception[1]). Zig isn't (it will likely end up safer than C, perhaps “modern C++”-safe, but not memory safe).

[1]: there can be memory safety issues in the presence of data races, but this has ever been proven exploitable, doesn't cause the compiler to completely miscompile, and is very rare in practice, so it's not comparable to memory-unsafe languages.


> this has ever been proven exploitable

I agree (typo exploit!)

See eg https://blog.stalkr.net/2015/04/golang-data-races-to-break-m... & https://blog.stalkr.net/2022/01/universal-go-exploit-using-d...

(And also, people shouldn't take "nobody has developed an exploit for this vulnerability yet" as any kind of strong evidence; attack techniques always get better, never worse, over time. Crypto algorithm people have it right when they start bracing for impact quite early after signs of a theoretical break.)


Oh, I wasn't aware of the second blog post, thanks.


What makes Zig safer than Go, to your mind?


Features such as sum types (enums) that you can pattern match on. Or generics (well, now Go has them as well, for a reason).

Maybe it doesn't immediately sound like this is related to security, but it is. If it is hard to model your data and hard to work with it, then people will take the "easy and fast" path.

Think Java: for each type you have to create a new file. Even with modern tooling that is still annoying. So people often take a shortcut and just use "String". Now you have "String password" and "String userId", and you can swap them by accident and print the password. Artificial example, I know, but I hope it explains what I mean in general.


> Think Java: for each type you have to create a new file.

That's untrue, Java has inner classes, and they can even be public. A "public static" inner class is nearly indistinguishable from a normal top-level class (the only real difference is that its name in the bytecode has a $ character separating the names, that is, its name in the bytecode ends up being something like "org.example.Outer$Inner").


Well, you still have to define them inside another class, and you have to find one that makes sense. They also (unless static) hold a reference to the outer class, which might not be desirable.

That being said, it maybe makes it slightly better, but I hope you agree that this is still a very suboptimal solution, and it probably comes from a time when searching filenames was the best way to navigate code, in the absence of modern IDEs.


No you don't, only for public types; types internal to the package can stay in the same file.


Rust, yes; Zig is hardly any better than Modula-2.


So that means Modula-2 is better than Go? :)


We have no indication that memory safety is even involved in this bug. For all we know, it could be a timing vulnerability that allows recovery of key material, data being copied from the wrong object, or a protocol flow bug that lets the attacker bypass validation. You can create a vulnerability by adding || where you meant && in any programming language, even Rust without any unsafe code.


It's hilarious how HNers on the "pro memory safety" side of the fence have this moronic attitude that memory disclosures are likely to happen in memory-safe languages. It's simply false. You may be right in 1% of cases, but no more.


I want it to be never. Controversial opinion: as long as those who use "security" to oppress us have the upper hand, code written in "unsafe languages" will always leave a path to freedom from the authoritarian dystopia of corporations and governments that seek ever more control over our lives. We've already seen the battle start with DRM, jailbreaking/rooting, etc. IMHO the periodic but not-too-frequent occurrence of vulnerabilities like this, just like a nonzero amount of (cyber)crime, is a justifiable cost that we must continue to tolerate and pay for the sake of our freedom.


What are you on about? Using a library written in memory-safe language won't oppress you.


Software inhibiting user freedom (like DRM) often gets broken using buffer overflows, string parsing mistakes, etc. (as seen on many game consoles). Broken (DRM) software allows for more user freedom.

If this software were written in Rust, it would be much harder to break than is currently the case.

And tbh, I have to concur (somewhat). I have lost little to nothing due to software exploits, but have gained significantly: for example, reading epubs on my Kindle, or loading homebrew on game consoles.


The idiom for this is “cutting off your nose to spite your face.”

You should do good things consistently, not bad things to offset worse things. If DRM is a serious problem in your life, put your money and time where your opinions are and avoid hardware and software products that enforce it rather than mandating insecurity for everyone else.


It's impossible to get AAA games (and most AA games) without some form of DRM (with some notable exceptions), and the same goes for high(er)-budget media productions.

The Kindle was already 8 years old when I got it; isn't it better to re-use it with more current software? The same goes for router hardware that gets exploited to flash OpenWrt.

It's very hard to get a modern Smartphone (with acceptable cameras, battery life, performance and software availability) with manufacturer-intended root access.

While I agree that people should adopt Rust (and other approaches) for its security properties, it's not hard to see how that may lead to exploits getting rarer, and to more categories of devices & content that can't reasonably be used in a "free" way, even when not intended by the manufacturer. That makes it much harder to have control over the devices you own (without becoming some kind of luddite).


I empathize with this position: there are a lot of people out there who are discovering that they don't really own the content they've paid for, because they're tied to electronic ecosystems they have no control over.

That being said: I don't think the world is necessarily a worse place if (1) everybody's devices are more secure, and (2) consumers as a whole are disincentivized from buying into ecosystems that fundamentally don't respect their rights. At the risk of sounding like the luddite you mentioned: maybe we really could use a little separation between technology and literally every other domain of our lives.


I see the same attitude from people insisting on using an "open" Android-based phone that Google uses to spy on them mercilessly, while eschewing Apple because they are so "authoritarian" and sneaky. The logic often stated is that Apple can't be trusted because they're considering the option of maybe starting an ad business.


Those who use them will, and they have already been doing so even without memory-safe languages, one notable example being that company named after a fruit; but for a long time, there was always a way out.

The metaphor I like to use is "giving them better nooses to put around our necks."

...and I suppose you could argue that neither do guns kill people...?


The C folks used to call the Algol lineage of programming languages "programming in a straitjacket," while we called their style cowboy programming.

Unfortunately for computer security, it has been the Wild West.


Someone else using it might though.


Security flaws can be exploited by governments and corporations, too.


And much more effectively, and at much larger scale.

The same security flaw that lets you jailbreak a phone could also allow a hostile entity to say "we don't need you to unlock your phone/laptop, we'll just seize it and break into it using known security vulnerabilities".

Buy devices that you control. Don't try to make other people's devices less secure because you want to break into your own.


I was gonna say - who's more likely to benefit from memory corruption bugs: the general populace, or the trillion-dollar military-intelligence complex?


You first. What browser and OS are you posting from?


Not OS or browser but my SSH servers use Teleport and my HTTPS servers use Traefik or Caddy.

Caddy, Traefik, and Teleport are written in Go and don't use OpenSSL. It's a start.


I adore Go, but it seems to be impacted too: https://groups.google.com/g/golang-announce/c/dRtDK7WS78g


Filippo Valsorda (FiloSottile), the maintainer of Go's cryptography libraries, said on Go's Slack channel that the patch is unrelated.

See screenshot of the slack conversation: https://paste.pics/f5622033ae711b36e0bbcda393a67866


This is almost certainly a different bug. I don't believe Go's standard library uses OpenSSL.


You’re right re: implementation; I’m drawing a conclusion solely from the coordinated disclosure that it’s a similar crypto/TLS issue.

If the Go issues were distinct I’d imagine they’d choose a different day to disclose/release?


> If the Go issues were distinct I’d imagine they’d choose a different day to disclose/release?

I think it's just a funny coincidence. That's going based on what I know about the OpenSSL one; I don't know anything about the Go one. We'll find out!



Indeed, it doesn’t sound like a coincidence.


>You first. What browser and OS are you posting from?

The ones that use Rust - Firefox and Windows


Neither Firefox nor Windows is written wholly, or even substantially, in Rust. I thought “we” were rejecting programs written in unsafe languages?


True, but outside the kernel, Windows has plenty of infrastructure running on .NET code.

Additionally, even though C++ is unsafe, it is still better than plain old C, and since Vista it has been the migration path for kernel code. Nowadays there are even template libraries, like WIL, that can be used in the kernel and in drivers.

Finally the Microsoft Security Guidelines are:

1 - use managed languages if one can afford it

2 - use Rust

3 - use C++, alongside SAL and Core Guidelines checkers


Firefox uses NSS for TLS, no?


If your server is in Erlang, it has probably dodged this (Erlang dodged Heartbleed because it only uses the SSL libs for cryptographic primitives).


This isn't a problem with unsafe languages. This is a problem primarily with OpenSSL, but also with the entire structure around SSL, which does too much, which means the potential attack surface is too large.

Wasn't there a fork the last time they fucked up badly with a security issue anyway?


Until we finally get widespread liability across the industry, not only in domains where human lives are at risk, just like in every other industry.

When the pocket money gets affected all companies will care about security.

Things are thankfully moving in that direction; the US security bill already calls out that one needs to think carefully about delivering software written in C and C++. It's only a matter of time until one needs some kind of clearance to deliver software written in unsafe languages to government agencies.


Seven.


including this one?


All languages are unsafe because programming is inherently unsafe.

Otherwise I look forward to the day when we stop driving unsafe cars, whatever that means. Accidents can always happen.


Indeed, let's abolish seat belts!

Needless gizmos.



