One should note that the CM-1/2 (which is essentially an FPGA turned inside out, one you can reconfigure for every program step) has a radically different architecture from the CM-5 (which is essentially the same as modern many-CPU distributed-memory supercomputers).
Also of note is that the *Lisp described in Hillis' paper (xectors and xappings, with a more or less hidden mapping to hardware) is completely different from the *Lisp that was actually sold by TMC, which handled embedding the problem geometry into the hardware but otherwise was the Paris assembler (i.e. what you send through the phenomenally thick cable from the frontend to the CM to make stuff happen) bolted onto Common Lisp. IIRC the commercial *Lisp was eventually open-sourced and you can run it (in emulation mode) on top of SBCL.
You're right; in the video I linked above he talks about how different the CM-1/2 architecture is from the CM-5, but also how the ideas of "data parallelism" on "virtual processors" map onto both designs.
Thanks for the info. I have seen variants in old PDFs that use the !! parallelism construct instead of the algebraic forms of alpha, beta, and dot. I find the latter form, as described in the book The Connection Machine, to be very elegant.
The *Lisp in the book The Connection Machine used a different syntax: there the operators α, β, and · were used to algebraically map and reduce Lisp functions over parallel data structures, as described in the paper by Hillis and Steele.
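To give a rough feel for those operators, here's a tiny sequential Python analogy (my own sketch, not actual *Lisp syntax): α behaves like an elementwise map over a parallel collection, and β like a reduction over it.

    from functools import reduce

    # alpha: apply f to every element "at once" (one element per
    # processor on a CM; simulated sequentially here).
    def alpha(f, xs):
        return [f(x) for x in xs]

    # beta: combine all elements with a binary function (a parallel
    # reduction, O(log n) steps on the real hardware).
    def beta(f, xs):
        return reduce(f, xs)

    values = [1, 2, 3, 4]
    squares = alpha(lambda x: x * x, values)      # -> [1, 4, 9, 16]
    total = beta(lambda a, b: a + b, squares)     # -> 30

On the actual machine both would run across thousands of processors simultaneously; the elegance is that ordinary Lisp functions become parallel just by composing them with α and β.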
Real computers have blinken lights. In an age of dull beige boxes, that’s what truly set them apart, at least to a layman.
I’m half joking, but half not. Nvidia, Cray, etc. need to put some blinken lights on these drab racks. Something with AI needs lights, and it looks sexy to Joe Public.
Like in classic sci-fi, a sentient AI machine would have columns of blinken lights, tended to by women with clipboards, lab coats, and high heels.
(Somewhat) funny story: My wife used to work at TMC and said that initially the blinken lights actually indicated the utilization of each CPU, but programmers spent so much time entertaining themselves by writing code to do animations in the blinking light matrix that customers demanded Thinking Machines do something about it. So they changed the lights to blink randomly.
When TM was at its peak, I heard comments from people in the DoD/intel community that the primary value of having a Thinking Machines computer was its recruiting value. At least one company set one up in a very prominent place so that new developer recruits would see it and think the company was cutting edge. They rarely if ever actually developed on it, because they found it just wasn't practical to write (or rewrite) code bases to take advantage of the parallel architecture. (And DARPA had paid for it, so it didn't actually cost the company that much.)
There's something awe-inspiring about big halls full of identical racks fitted with identical machines, with thick, even bundles of colourful network cables straining against their strips. Each unit -- row, rack, or server -- anonymously, but with great power, quietly (underneath the roar of the fans) grinding away on some unknown, ephemeral subpart of a workload. From a distance, calm and regular, serene even; but as you get closer, plenty of blinking lights, on hard drives and NICs, feverishly and with no apparent pattern giving a small hint of the fierce activity taking place inside the cool metal box.
I've watched blinken lights on my motherboard for about 2 seconds when it performed automatic overclocking. POWER, CPU, DRAM, CPU, DRAM, CPU, DRAM, TPU, BOOT. Quite entertaining.
I've seen some PC motherboards with an integrated "POST card" in the form of a bunch of LEDs or a pair of 7-segment displays. Sadly this seems to have been replaced by a few high-level LEDs and a bunch of meaningless ones, including stuff like backlit PCBs.
By the way, many DEC Alpha boards had a large number of LEDs near the CPU (probably driven directly by the CPU) which showed the state of the PALcode (and thus blinked in an entertaining way even when the system was up and running).
The purpose of those lights was to give a glimpse of the status of the machine - which cores were idling and which were not. The upper 16 LEDs of each board were doubled. I assume the extra ones were used for diagnostics.
If your computer lives in a datacenter, far from view, there is little purpose to them. If, however, it's a smaller unit that lives on a desk or in an office, being able to quickly tell its state is interesting. Good visualization is an art.
Only slightly related, but imagine taking a huge server rack and inserting, hidden behind the front, a glass jar with a silicon brain into which tons of glowing wires run.
I want to hear the maintenance calls for that one.
Best blinking light machine I ever saw was the Nanodata QM-1. More than a thousand LEDs. It had a program called 'tsq' which was short for 'Times Square' which would display a scrolling message.
It's interesting to see how some of what Thinking Machines thought would happen in the future has now come to pass, such as scientists renting computing capacity by the hour, e.g. GPU rental on cloud computing.
Well... Each board had 32 LEDs (the top 16 were doubled; I don't know what for). Each cube had 16 boards: 8 on one side, a couple in the middle without LEDs, doing communications IIRC, and 8 more on the other side. Not sure it had LEDs on the back cubes.
I'm seriously thinking about building a cluster of ARM-based thingies and using LEDs controlled from each node to show usage of cores, NEON lanes (patching the Ne10 library), and so on (there's a rough sketch of the LED side after these options). There are some octa-core big.LITTLE (I forget the new name) boards that would make the carrier boards simpler (only 4 per carrier needed, considering CPUs alone). The boards themselves would be simple, having only LEDs connected to the GPIO pins and power being fed to the nodes, which would be wired together using ethernet.
Another, way cooler but waaaaay dumber approach (because it'd be a shitload of work for me) would be to design a board around an ethernet switch and a bunch of Octavo SiPs (or, maybe, some Pi-like CPU SoC with PoP RAM on top, provided it has ethernet on board to reduce chip count). Having everything on a single board would avoid PHY transceivers and reduce board complexity, but it still would be a ton of work for someone who hasn't designed a PCB since the dawn of the SMD era. Also, the Octavo parts are single-core and we'd need 32 of them per board to light up 32 LEDs in a meaningful way. I'd rather restart my hardware-engineer career with something less megalomaniac.
The final, laziest approach would be to get a cluster board and 7 SOPINE modules from the fine people at Pine64 and wire their GPIO lines to a couple of LED matrix modules. With 28 cores per board, the use of 28 of the 32 lights would be simple to figure out, but we'd need something for the other 4 LEDs (2 could be from the on-board ethernet upstream port, but 2 still remain). Also, since the SOPINEs stand perpendicular to the cluster board, spacing would be very tight.
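Whichever option I picked, the LED-driving side would be simple enough. A rough Python sketch of the per-core usage idea (the GPIO write is a placeholder, since the real interface depends on the board):

    import time

    # Sample per-core busy time from /proc/stat and drive one LED per core.
    def cpu_times():
        stats = {}
        with open("/proc/stat") as f:
            for line in f:
                if line.startswith("cpu") and line[3].isdigit():
                    parts = line.split()
                    idle = int(parts[4]) + int(parts[5])      # idle + iowait
                    total = sum(int(p) for p in parts[1:])
                    stats[parts[0]] = (idle, total)
        return stats

    def set_led(core, on):
        # Placeholder: swap in a sysfs/libgpiod write for the actual board.
        print(core, "ON" if on else "off")

    prev = cpu_times()
    while True:
        time.sleep(0.5)
        cur = cpu_times()
        for core in cur:
            d_idle = cur[core][0] - prev[core][0]
            d_total = cur[core][1] - prev[core][1]
            busy = 1.0 - (d_idle / d_total if d_total else 1.0)
            set_led(core, busy > 0.5)   # LED on when the core is >50% busy
        prev = cur

(PWM-dimming each LED by the busy fraction would look even more CM-like than plain on/off.)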
He did more than that, but painting a wall, or other manual labor that doesn't require concentration, gives time for deep thinking. I'm sure he didn't mind it.
"""
We were arguing about what the name of the company should be when Richard walked in, saluted, and said, "Richard Feynman reporting for duty. OK, boss, what's my assignment?" The assembled group of not-quite-graduated MIT students was astounded.
After a hurried private discussion ("I don't know, you hired him..."), we informed Richard that his assignment would be to advise on the application of parallel processing to scientific problems.
"That sounds like a bunch of baloney," he said. "Give me something real to do."
So we sent him out to buy some office supplies. While he was gone, we decided that the part of the machine that we were most worried about was the router that delivered messages from one processor to another. We were not sure that our design was going to work. When Richard returned from buying pencils, we gave him the assignment of analyzing the router.
"""
Haven't read his book but I've listened to him talk on podcasts. I think he makes some excellent criticisms of the horrible state of dietary science. He can be a bit dogmatic himself and sometimes makes unhelpful statements, but he also has some great points. I'd probably credit him with the chain of events that resulted in me deciding to go low-carb, which has turned out to be a great decision for me so far.
Cool and awesome does not pay. The top companies may be using cool stuff to accomplish things, but the things being done are ultimately pedestrian. Google sells ads, Amazon runs an online marketplace, Apple makes personal communicators and Facebook is a place for chatting with friends. None of them are doing anything like "searching for the origins of the universe."
Cool and awesome pays just fine. Ask NVIDIA. Many of the ideas pioneered in the Connection Machine have been validated by time. Hillis was only a couple of decades or so ahead of his time.
I actually laughed hard at your last sentence. But you have made a very valid point.
One has to be pragmatic while establishing a company, especially in tech. The pitch shouldn't be something like "Inventing a time-travelling machine that will reverse the entropy of the universe while building self-replicating von Neumann constructors".
Here's a great video describing the architecture of the CM-5
https://youtu.be/Ua-swPZTeX4
Note how similar the programming concepts are to CUDA (at an abstract level). Hillis also in the 80s published his MIT thesis as a book: The Connection Machine
https://www.amazon.com/Connection-Machine-Press-Artificial-I...
An incredibly well written and fascinating read, just as relevant today for programming a GPU as it was for programming the ancient beast of a CM-2. It's about algorithms, graphs, map/reduce, and other techniques of parallelism pioneered at Thinking Machines.
For example, Guy Blelloch worked at TM, and pioneered prefix scans on these machines, now common techniques used on GPUs.
https://www.youtube.com/watch?v=_5sM-4ODXaA
http://uenics.evansville.edu/~mr56/ece757/DataParallelAlgori...
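To make the scan idea concrete, here's a minimal Python sketch of the inclusive prefix scan in the style of that "Data Parallel Algorithms" paper (the O(n log n)-work Hillis-Steele variant, not Blelloch's later work-efficient one); the inner loop simulates what every processor does simultaneously in each of the log n steps:

    # Hillis-Steele inclusive prefix scan, simulated sequentially.
    def hillis_steele_scan(xs):
        xs = list(xs)
        n = len(xs)
        step = 1
        while step < n:           # log2(n) passes
            prev = list(xs)       # all "processors" read the previous generation
            for i in range(step, n):
                xs[i] = prev[i - step] + prev[i]
            step *= 2
        return xs

    print(hillis_steele_scan([3, 1, 7, 0, 4, 1, 6, 3]))
    # -> [3, 4, 11, 11, 15, 16, 22, 25]

The same pattern, with + swapped for any associative operator, is what shows up today in CUDA scan kernels.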
There's also been a lot of buzz lately on HN about APL; many of Hillis' *Lisp ideas come from parallelizing array-processing primitives ("xectors" and "xappings"), ideas that originated in APL, as he acknowledged in the paper describing the language:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.108...
What's old is new... again.