The 1960s memory models of current programming languages were as much a problem as the OSes, since the benefits of persistent programming require slightly different thinking.
This reminds me of running Unix on the Cray X-MP. It worked, but was very slow since calling a function stalled these superpipelined machines designed to run FORTRAN really fast.
Interactively, it felt slower than even a Sun-3. I know that isn't scientific; I never actually measured it.
Seymour Cray's CDC machines were fast computers, but IMHO the Cray Research machines were essentially high-performance vector units that also happened to be able to run a bit of control code. I guess building a coprocessor wasn't sexy enough, or maybe by then there was enough choice in mainframes that interconnect (far from standardized in those days) would have been a barrier.
We weren’t building them to be interactive dedicated mostly-idle personal computers with snappy response times, and that’s not why they were purchased.
I’m not disagreeing with you! I don’t really understand why people wanted it — it wasn’t what the machine was designed for.
I mean, I do understand (but disagree with) the specific use case at NASA where I encountered it: the CFD job would run all night (or longer), and when it was done the files went to some Irises for visualization, which also took forever. So being able to run the same source code on both machines (basically rsync, IIRC) sounds easier in theory. But at what cost? I think it would have been better to just write the transfer code to run under the Cray OS.
And indeed, once UNICOS was on the machine, people did want to run it interactively.