Hacker News | musicale's comments

I'm running macOS because it runs desktop applications, and the macOS desktop itself, fairly well.

But I wouldn't mind if macOS (a BSD-flavored Unix) added support for the Linuxulator, jails, pledge, and various other BSD-flavored things.


Here is my TL;DR and interpretation:

1. Knuth laments the lack of technical ("internal") history of computing, which traces the evolution of technology and ideas, and should be of great interest and benefit to practitioners.

2. Historians typically focus on their domains of expertise - social history, culture, economics, politics, personalities, etc. - and tend to write non-technical ("external") history of computing.

3. The people who have the relevant technical expertise - practitioners, researchers, and scholars within the computing field - are qualified (in terms of technical understanding at least) to write this technical history, but have basically zero economic incentive to do so. There is no reward for industry practitioners to write the technical history of computing, and there is little to no reward for computing researchers or scholars either. And of course if one is (or becomes) an expert in computing, there is no economic incentive to become (or remain) a historian.

4. Nonetheless, there is in fact a small (and hopefully growing) group of scholars who seem to be interested in investigating the technical history of computing (and according to the author "holistic" history which includes multiple aspects.)


I tend to agree with Knuth - technical history is extremely valuable to both practitioners and researchers in computing, and there isn't enough of it.

While it is understandable that computing practitioners and researchers want to look forward to the next "new" thing rather than backward to "old" things, ignoring computing history means that we are often reinventing the wheel, repeating old mistakes, etc., all while lacking an understanding of how and why things are the way they are today. And perhaps missing out on a great deal of fun and intellectual engagement as well.

Fortunately there is some activity in terms of writing up and analyzing the technical history of computing, and I certainly appreciate the work of the CHM, journals like the Annals of the History of Computing, the work of retrocomputing hobbyists, and the work of the scholars mentioned in the article. But (as the article notes) there are few economic and career incentives - in history or in computing - to produce this important work.

The article validates Knuth with these statements:

> For different reasons, outlined below, neither group has shown much interest in supporting work of the kind favored by Knuth. That is why it has rarely been written.

> Most of this new work is aimed primarily at historians, philosophers, or science studies specialists rather than computer scientists

> Work of the particular kind preferred by Knuth will flourish only if his colleagues in computer science are willing to produce, reward, or commission it.

The second part of this last sentence isn't wrong, but sidesteps the first point. One might similarly criticize history departments for failing to reward or commission technological literacy.


I also agree with Knuth. For me it has been extremely valuable to know the history of various technologies, and especially to know why the optimum solutions have been replaced from time to time and the causal connections between various discoveries.

I frequently see opinions expressed that old scientific and technical publications are obsolete, but in my opinion this is very naive.

The optimum technology or algorithm for solving a certain problem changes when improvements are made in other domains. However, the range of kinds of solutions for a given problem is usually finite, so when the optimum solution changes over time it may well change back to a kind of solution that has already been used in the past.

Because of this, it is very frequent to see claims about the discovery of "new" things, where the so-called "new" things were well known and widely used some decades ago, or even much earlier.

The worst part is not the time wasted on rediscovering old things, but the fact that the rediscoveries are usually incomplete: the finer points are not rediscovered with them, such as which variants are most efficient and which limitations may make them inapplicable in certain contexts.

Knowing a detailed technical and scientific history avoids such cases.



It seems like Sony and Nintendo had more high-quality exclusives (though mostly timed exclusives for Sony), which are system sellers, while most Xbox games were also available on Windows.

But the Xbox hardware is good, franchises like Halo / Gears / Forza etc. have always been good, and Xbox Game Pass is great.


He actually understood games, but somehow Xbox always seemed mismanaged.

The 360 era was good and they were really trying. For the last 10 years I don't even know what Xbox stands for. Game Pass is a neat SaaS, the consoles are meh, and PC gaming went its own way a long time ago and is in a healthy place.

360 also benefited from PS3 issues (unusual architecture that was hard to program effectively, high price). GamePass is great, and the Xbox hardware is good, but PS4 and PS5 had stronger timed exclusives while Switch had an appealing combination of Nintendo first-party exclusives (several of them rising from the ashes of the Wii U), lower cost, and handheld/hybrid operation.

At least Phil Spencer knew something about games.

Bounded strings turned out to be a fairly good idea as well.

As I and others noted below, it is included in Apple's clang version, which is what you get when you install the command line tools for Xcode. Try something like:

    clang -g -Xclang -fbounds-safety program.c
Bounds check failures result in traps; in lldb you get a message like:

    stop reason = Bounds check failed: Dereferencing above bounds

I want an OS distro where all C code is compiled this way.

OpenBSD maybe? or a fork of CheriBSD?

macOS clang has supported -fbounds-safety for a while, but I'm not sure how extensively it is used.


Maybe this:

https://fil-c.org/pizlix

> Pizlix is LFS (Linux From Scratch) 12.2 with some added components, where userland is compiled with Fil-C. This means you get the most memory safe Linux-like OS currently available.

The author, @pizlonator, is active on HN.


I'm aware of Pizlix - it's a good project/idea that needs to go mainstream; as you mention, memory safety is currently limited to userland (still a huge improvement over a traditional unsafe userland).

Note also that it uses fil-c rather than clang with -fbounds-safety. I believe fil-c requires fewer code changes than -fbounds-safety.


https://github.com/hsaliak/filc-bazel-template - I created this recently to make it super easy to get started with Fil-C projects. If you find the setup in the core distribution daunting and want a 3-4 step approach to building a Fil-C enabled binary, then try this.

hot dang that's neato. shame about the name, though.

You need to annotate your program with indications of what variable tracks the size of the allocation. So, sure, but first work on the packages in the distro.

Note that corresponding checks for C++ library containers can be enabled without modifying the source. Google measured some very small overhead (< 0.5% IIRC) so they turned it on in production. But I'd expect an OS distro to be mostly C.

[1] https://libcxx.llvm.org/Hardening.html


Get gentoo, add this to CFLAGS and start fixing everything that breaks. Become a hero.

It is called Solaris, and has had this enabled since 2015 on SPARC.

https://docs.oracle.com/en/operating-systems/solaris/oracle-...


Might as well not even talk about anything with the Oracular kiss of death.

Aren't Illumos and OpenIndiana doing the same?

I still remember someone at Sun commenting that they treated warnings as errors. This is how software should be developed.


The feature is only on SPARC, not x86. Oracle killed in-house SPARC development in 2017, and they abandoned OpenSPARC after they acquired Sun, so it's effectively a dead architecture. The software won't work without the hardware to run it on.

Fujitsu also does SPARC, and unlike HP-UX, people still do buy Solaris.

EDIT:

https://www.oracle.com/servers/sparc/

https://www.fujitsu.com/global/products/computing/servers/un...

Finally, it is up to Intel and AMD to come up with hardware memory tagging; so far they have messed up all attempts, with MPX being the last short-lived one.


It's good info, and I wouldn't rush a migration off of SPARC systems if I was already using them, but slow death is still death. It was already worrying that workstations were killed off by Sun before the Oracle acquisition; it seems quite clear that no one has been serious about spreading adoption of the architecture for more than two decades now.

Even Fujitsu has been moving away from SPARC. What was the last SPARC Fujitsu designed?

What matters is that they are still selling them.

Kind of. Atos still sells GCOS/GECOS mainframes, but they are Xeon boxes running emulators. Same with Unisys and MCP (which was written in an ALGOL dialect and had bounds checking, IIRC).

Not everyone suffers from Oracle phobia.

Some of us actually do read licenses before using products.

Also, the FAANGs are hardly any better; they just spew cool marketing stuff like "don't be evil".


FAANG won't send auditors to check whether you are in compliance with the license you paid for. Per-core/per-socket licensing is one of the reasons POWER can do SMT8.

>I want an OS distro where all C code is compiled this way.

You first have to modify "all C code". It's not just a set and forget compiler flag.


Indeed. I still want it.

Fedora and its kernels are built with GCC's _FORTIFY_SOURCE, and I've seen modules crash on out-of-bounds reads.

_FORTIFY_SOURCE is way smaller in scope (as in, it closes fewer vulnerabilities) than -fbounds-safety.

What are you hoping it will achieve?

The internet went down because Cloudflare used a bad config... a config parsed by a Rust app.

One of these days the witch hunt against C will go away.


A service going down is a million times better than being exploited by an attacker. If this is a witch hunt then C is an actual witch.

Why can it be exploited? I’ve configured my OS so my process is isolated to the resources it needs.

What language is your OS written in?

It's written in C, I'm glad you asked. Do you have any exploits in the Linux process encapsulation to share?

Surely you're not suggesting that the Rust compiler never produces exploitable code?


I probably don’t have such an exploit, since you’re probably running something up to date. There have been many in the past. I doubt the last one to be fixed is the last one to exist.

If your attitude is that getting exploited doesn’t matter because your software is unprivileged, you need some part of your stack to be unexploitable. That’s a tall order if everything is C.

You can get exploitable code out of any compiler. But you’re far more likely to get it from real-world C than real-world Rust.


> you need some part of your stack to be unexploitable.

Kernel level process isolation is extremely robust.

> If your attitude is that getting exploited doesn’t matter because your software is unprivileged

It’s not that exploits doesn’t matter. It’s that process architecture is a stronger form of guarantee than anything provided by a language runtime.

I agree that the place where rust is most beneficial is for programs that must be privileged and that are likely to face attack - such as a web server.

But the idea that you can’t securely use a C program in your stack or that rust magically makes process isolation irrelevant is incorrect.


How can process architecture be a stronger guarantee than anything provided by a language runtime when it is enforced by software written in a language?

You have a process receiving untrusted, potentially malicious input from the outside. If there’s an exploit then an attacker can potentially take control of the process. Your process is isolated, that’s good. But it can still communicate with other parts of your system. It can make syscalls. Now you’re in the same situation where you have a program receiving untrusted, potentially malicious input from the outside, but now “the outside” is your subverted process, and “a program” is the kernel. The same factors that make your program difficult to secure from exploits if it’s written in C also apply to the kernel.

I’m not sure where those ideas as the end of your comment came from. I certainly didn’t say them.


> How can process architecture be a stronger guarantee than anything provided by a language runtime when it is enforced by software written in a language?

Please learn more about this topic. You don't understand OS security models.


The internet didn't go down and you're mischaracterizing it as a parsing issue when the list would've exceeded memory allocation limits. They didn't hardcode a fallback config for that case. What memory safety promise did Rust fail there exactly?

I think the point is memory bugs are only one (small) subset of bugs.

The conventional wisdom is ~70% of serious security bugs are memory safety issues.

https://www.cisa.gov/sites/default/files/2023-12/CSAC_TAC_Re...


Security bugs (as opposed to bad security processes) are a small subset of bugs.

A panic in Rust is easier to diagnose and fix than some error or garbage data caused by an out-of-bounds access in some random place in the call stack.

Does any distro use clang? I thought all Linux kernels were compiled using GCC.

Chimera does, it also has a FreeBSD userland AFAIU.

https://chimera-linux.org/


hm this one is interesting. Thanks for sharing!

https://www.kernel.org/doc/html/latest/kbuild/llvm.html

> The Linux kernel has always traditionally been compiled with GNU toolchains such as GCC and binutils. Ongoing work has allowed for Clang and LLVM utilities to be used as viable substitutes. Distributions such as Android, ChromeOS, OpenMandriva, and Chimera Linux use Clang built kernels. Google’s and Meta’s datacenter fleets also run kernels built with Clang.


Not a Linux distro, but FreeBSD uses Clang.

And Android uses Clang for its Linux kernel.

-fbounds-safety is not yet available in upstream Clang though:

> NOTE: This is a design document and the feature is not available for users yet.


It's too bad that cities are so unlivable due to noise pollution (constant aircraft noise, road noise, loud rumbling that travels for miles, 24/7 emergency vehicle sirens, etc.)

I concur - research can include both scientific and engineering research.

I note MIT (like many universities) has a department of Electrical Engineering and Computer "Science".


It's interesting seeing the EECS and CS+CompENG programs splitting into two CompE and AI programs currently. This is happening in my department where we are standing up an AI major and we're all asking "Is the CS department the AI department now or what? Where do all the systems people go?"
