Hacker News | mirashii's comments

> But from my own experience, UB just means "consult your compiler to see what it does here because this question is beyond our pay grade."

People are taught it’s very bad because otherwise they do exactly this, which is the problem. What your compiler does here may change from invocation to invocation, due to seemingly unrelated flags, small perturbations in unrelated code, or many other things. This approach encourages accepting UB in your program. Code that invokes UB is incorrect, full stop.


I understand, but you have to see how you would be considered one of the Standards-Purists that I was talking about, right? If Microsoft makes a guarantee in their documentation about some behavior of UB C code, and this guarantee is dated to about 14 years ago, and I see many credible people on the internet confirming that this behavior does happen and still happens, and these comments are scattered throughout those past 14 years, I think it's safe to say I can rely on that behavior, as long as I'm okay with a little vendor lock-in.

> If Microsoft makes a guarantee in their documentation about some behavior of UB C code

But do they? Where?

More likely, you mean that a particular compiler may say "while the standard says this is UB, it is not UB in this compiler". That's something wholly different, because you're no longer invoking UB.


Yes, that is still undefined behavior. Behavior being defined or not is a standards-level distinction, not a compiler one.

> Code that invokes UB is incorrect, full stop.

That's not true at all; who taught you that? Think of it like this: signed integer over/underflow is UB. All addition operations over ints are potentially invoking UB.

   int add(int a, int b) { return a + b; }
So this is incorrect code by that metric, that's clearly absurd.

Compilers explicitly provide you the means to disable optimizations in a granular way over undefined behavior precisely because a lot of useful behavior is undefined, but compilation units are sometimes too complex to reason about how the compiler will mangle it. -fno-strict-aliasing doesn't suddenly make pointer aliasing defined behavior.

We have compiler behavior for incorrect code, and it's refusing to compile the code in the first place. Do you think it is just a quirky oversight that UB triggers a warning at most? The entire point of compilers having free rein over UB was so they could implement platform-specific optimizations in its place. UB isn't arbitrary.


> -fno-strict-aliasing doesn't suddenly make pointer aliasing defined behavior.

No, it just protects you from a valid but unexpected optimization to your incorrect code. It's even spelled out clearly in the docs: https://www.gnu.org/software/c-intro-and-ref/manual/html_nod...

"Code that misbehaves when optimized following these rules is, by definition, incorrect C code."

> We have compiler behavior for incorrect code, and it's refusing to compile the code in the first place

This isn't and will never be true in C, because whether code is correct can be a runtime property. That add function defined above isn't incorrect on its own, but combined with code that calls it at runtime with values that overflow, it is incorrect.


Those are the docs for a compiler, not the language standard.

> All addition operations over ints are potentially invoking UB.

Potentially invoking and invoking are not the same.


I’m not sure there’s anything clever that resolved the issues; they just settled on slower execution by accepting dynamic dispatch for generics.

Not according to this post:

> Go generics combines concepts from "monomorphisation" (stenciling) and "boxing" (dynamic dispatch) and is implemented using GCshape stenciling and dictionaries. This allows Go to have fast compile times and smaller binaries while having generics.

https://deepsource.com/blog/go-1-18-generics-implementation


Yes, note lack of mention of speed :)

This comment from a dupe thread is worth considering: https://news.ycombinator.com/item?id=46137352

I agree, and I personally wouldn't call golang memory safe for that reason. Thomas's semi-definition includes the word "vulnerability", which narrows the scope so much that golang fits under the bar, since the common data race that causes memory corruption hasn't been shown to be exploitable without being contrived.

My personal definition of memory safety for a language like golang would specify that you can't cause this sort of memory corruption without an explicit call to unsafe, but there's no real definition to fall back on.


The same thing happens any time a message board confronts a professional term of art. The same thing happened with "zero trust", where you'd have long wooly debates about what was meant by "trust", but really the term just meant not having a conventional perimeter network architecture. Sorry, but the term as used in industry, by the ISRG, and in government guidance refers specifically to vulnerabilities.


The check frequency isn't the problem, it's the latency between release and update. If a package was released 5 minutes before dependabot runs and you still update to it, your lower frequency hasn't really done anything.


What are the chances of that, though? The same could happen if you wait X amount of days for the version to "mature" as well. A security issue could be found five minutes after you update.

EDIT: Github supports this scenario too (as mentioned in the article):

https://github.blog/changelog/2025-07-01-dependabot-supports...

https://docs.github.com/en/code-security/dependabot/working-...
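As a sketch, the cooldown settings described in those links look roughly like this in dependabot.yml (field names per GitHub's docs; the specific day counts here are illustrative, not recommendations):

```yaml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      default-days: 7        # wait a week before picking up any new release
      semver-major-days: 30  # wait longer for major version bumps
```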


> What are the chances of that, though?

The whole premise of the article is that they’re substantially lower, because some time for the ecosystem of dependency scanners and users to detect and report is better than none.



> I think this is the wrong way to promote rust

This is entirely the wrong lens. This is someone who wants to use Rust for a particular purpose, not some sort of publicity stunt.

> I know nobody that programms or even thinks about rust. I’m from the embedded world a there c is still king.

Now’s a good time to look outside of your bubble instead of pretending that your bubble is the world.

> as long as the real money is made in c it is not ready

Arguably, the real money has been made in JavaScript and Python for the last decade. Embedded roles generally have fewer postings with lower pay than webdev. Until C catches back up, is it also not ready?


> This is entirely the wrong lens.

Telling people they need to take their ball and go home if they're incapable or unable to maintain an entire compiler back-end seems like a, shall we say, 'interesting' lens for a major distro such as Debian.

Just to parse some files?


Just to parse some files for which tools and libraries already exist, for "added security", without specifying a threat model.


These are not officially supported platforms. It doesn't seem that unreasonable for Debian to not want to be restricted to "code that can run on CPUs from the 1990s" in 2025.

It's cool that you can run modern Debian on an Amiga or whatever, but it's not particularly important that that be the case.


> Dependencies will be added, for very basic system utilities, on (parts of) a software ecosystem which is still a "moving target", not standardized,

This is the status quo and always has been. gcc has plenty of extensions that are not part of a language standard that are used in core tools. Perl has never had a standard and is used all over the place.


If you're designing an OS distribution, you would have your base system written adhering strictly to language standards and without relying on flakey extensions (not that GCC C extensions are flakey, I'm guessing most/all of them are stable since the 1990s), and minimizing reliance on additional tools.

For example, IIUC, you can build a perl interpreter using a C compiler and GNU Make. And if you can't - GCC is quite bootstrappable; see here for the x86 / x86_64 procedure:

https://stackoverflow.com/a/65708958/1593077

and you can get into that on other platforms anywhere along the bootstrapping chain. And then you can again easily build perl; see:

https://codereflections.com/2023/12/24/bootstrapping-perl-wi...


It feels like perhaps you’ve conflated two issues: the one in this thread, which is about using Rust in apt, much, much later in distribution bringup than this bootstrapping, and using Rust in something like the Linux kernel, which is more relevant to the kinds of bootstrapping discussions you posted.

apt is so late in the process that these bootstrapping discussions aren’t quite so relevant. My point was that at the same layer of the OS, there are many, many components that don't meet the same criteria posted, including perl.


The procedure to produce GCC you cited was 13 steps. Many of the tools were made after distributions required GCC. And a similar procedure could produce a Rust compiler.


No, it’s a valid question, and one that I’m sure will get some answers in the coming days and weeks as the discussion on adding this requirement continues, but in some sense, it’s beside the point.

Supporting this old hardware isn't free for businesses or hobbyists. The parties that feel strongly that new software continue to be released for a particular platform have options here, ranging from getting support for those architectures into LLVM and Rust, to pushing the GCC frontend for Rust forward, to maintaining their own fork of apt.


This whole "it used to be different" framing is looking back with rose-tinted glasses. It’s always been the case that project maintainers were able to make choices that the community didn’t necessarily agree with, corporate-backed contributors or not, and it’s still a possibility to fork and try to prove that the other stance is better.

Nobody is being forced out of the community; you can fork and not adopt the changes if you want. That's the real point of free software: that you have the freedom to make that choice. The point of free software was never that the direction of the software should be free from corporate control in some way; the maintainers of a project have always had the authority to make decisions about their own project, whether individual or corporate or a mix.


The point of freedom in software is certainly that I can create my own fork. And in individual projects a maintainer can certainly do what he wants. But it is still worrying if, in community projects such as Debian, decisions that come with a cost to some part of the community are pushed through without full consensus. It would certainly not be the first time. systemd was similar, and for similar reasons (commercial interests by some key stakeholders), and I would argue that Debian suffered a lot from how badly this was handled. I do not think the community ever got as healthy and vibrant as it was before this. So it would be sad if this continues.


...it is still worrying if, in community projects such as Debian, decisions that come with a cost to some part of the community are pushed through without full consensus.

What are some concrete cases you can point to where a decision was made with full consensus? Literally everyone agreed? All the users?

I'm not sure many projects have ever been run that way. I'm sure we've all heard of the Benevolent Dictator for Life (BDfL). I'm sure Linus has made an executive decision once in a while.


> pushed through without full consensus

Requiring full consensus for decisions is a great way to make no decisions.


> are pushed through without full consensus

You describe it that way, but that's not how the world in general works in practice. You do things based on majority.


No, this is not how you do things in a functioning community. You do things based on societal contracts that also protect the interests of minorities.


I cannot fathom using the rest of my Saturday attempting to break down the level of spin you’re trying to play at here.


> systemd was similar and for similar reasons (commercial interests by some key stakeholders)

False claims don't really make the claims about the evils of Rust more believable.

