
In Eclipse Phase:

> The acronym TITAN stands for Total Information Tactical Awareness Network. These were a group of highly advanced, self-improving seed Artificial Intelligences (AIs) that are responsible for the catastrophic event known as The Fall.

Someone else has already made the mandatory Torment Nexus quote.


Emacs is a (virtual) Lisp Machine

The latter 2 usages are pretty much the same. They just have different virtual instruction sets.

> The latter 2 usages are pretty much the same.

Only if you ignore most of reality, sure.


> Only if you ignore most of reality, sure.

No, not really.

Hypervisor VM: emulates a virtual computer with virtual, emulated hardware, but a simulated version of the same CPU as the host, allowing 1 OS to run under another.

E.g. Xen, VMware, KVM, bhyve

Bytecode VM: emulates a partial virtual environment, with an emulated CPU and some form of conversion or translation from virtual environment to the underlying real API and real OS, allowing programs to execute on radically different OSes on different CPUs.

E.g. JVM, MoarVM, Parrot VM, Dis in Inferno

Emulator VM: emulates a virtual computer with virtual, emulated hardware, including a virtual CPU.

E.g. MESS, RetroVM, ZEsarUX

Container: emulates an OS from userland down, but shares the same OS kernel across instances.

E.g. Docker, LXC, LXD, Incus, FreeBSD jails, Solaris Zones


So you can't tell the difference between a hypervisor and a bytecode interpreter.

OK.


Wow, just what I wanted for Christmas. Back in the day I found VisiOn's approach fascinating since almost everything at the time was either more tightly integrated or completely unintegrated.

Maybe someone could do the old Reason software-bus-based system next? As detailed in the January 1984 issue of Byte magazine. Lord only knows if there are surviving copies anywhere in the world.


One of the nicer features of D is that arrays are value types that don't decay to pointers.


Smalltalk romantics


do you have showdead turned on?


I think the original was supposed to be to the music from "The Caissons Go Rolling Along".


Were the roasts correct?


A couple of the points made were quite useful, but the tone was mean!


I'm not sure I'd want to limit the selection of languages that much. Depending on the project and how much language risk you can manage (as opposed to security risk), there also is D, Odin, and Zig. And probably a bunch more I'm unfamiliar with.


Most of what gives high-reliability or high-assurance code that label is the process rather than the language. In colloquial terms it rigorously disallows sloppy code, which devs will happily write in any language given the chance.

As much as C is probably the least safe systems language, and probably my last choice these days if I had to choose one, more high-assurance code has probably been written in C than any other language. Ada SPARK may have more but it would be a close contest. This is because the high-assurance label is an artifact of process rather than the language.

Another interesting distinction is that many formalisms only care about what I would call eventual correctness. That is, the code is guaranteed to produce the correct result eventually, but that result may take an unbounded amount of time to arrive. Many real systems are considered “not correct” if the result cannot be reliably delivered within some bounded period of time. This is a bit different from the classic “realtime systems” concept but captures a similar idea. This is what makes GCs such a challenge for high-reliability systems: it is difficult to model their behavior in the context of the larger system, for which you need to build something resembling a rigorous model.

That said, some high-assurance systems are written in GC languages if latency is not a meaningful correctness criterion for that system.

If I was writing a high-reliability system today, I’d probably pick C++20, Zig, and Rust in that order albeit somewhat in the abstract. Depending on what you are actually building the relative strengths of the languages will vary quite a bit. I will add that my knowledge of Zig is more limited than the other two but I do track it relatively closely.


I don’t understand how you can get the same kind of reliability with C as with SPARK. Process or not, a formal proof is a formal proof. That’s much harder to get with C.


Why is it harder?


> Most of what gives high-reliability or high-assurance code that label is the process rather than the language.

This is what I've heard too. I have a friend who works in aerospace programming, and the type of C he writes is very different. No dynamic memory, no pointer math, constant CRC checks, and other things. Plus the tooling he has access to also assists with hitting realtime deadlines, JTAG debugging, and other things.


> no pointer math

How does that work out? Does he never use arrays and strings?


Like in most languages, with indexes.


But in C that's just syntax sugar for pointer math.


It still makes it possible to have bounds checking. (And it is also not true anymore for C2Y.)


Except it makes the intention more obvious; it is about clarity for the reader.


My point was indeed that if you don't use pointer arithmetic in C, you don't use arrays. I mean, when you declare an array of fixed size, you could instead declare an equivalent number of scalar variables, but I would find that inconvenient. Hence the question.


If I remember correctly, he meant that only array accesses are used, because their length can be checked (as all arrays have a static length due to no dynamic memory).


Indeed, this is what many people do. But even if you use dynamic memory, if you replace pointer arithmetic by array indexing, you get bounds checking. And in C this also works for arrays of run-time length.


But can't I put any pointer arithmetic in array brackets, so it wouldn't limit anything?


Whatever index you compute can be checked against a bound.


2[a*b] What bound?


This does not even compile. For array indexing,

array[expression]

if "array" has a bound, then whatever the expression evaluates to can be checked against that bound. If "array" is not a bounded array but a pointer or an unbounded array, then this does not work, but my point is that it is easy to avoid such code.


Zig is not memory safe at all


Nor is any other dime-a-dozen LLVM frontend with basically the same bullshit syntax. What is your point?


swift and rust?


What about them?


they are memory safe


Proof?

