Back when I started my career at General Dynamics, the F-16 was slated to use implementations of MIL-STD-1750A, with the Z8002 as an interim processor for the fighter's next-generation computers (the F-16C/D in the early/mid 80s). I wrote some of the Jovial runtime that was intended to be used for both processors. No idea what happened to that code since I left in 1984. The 1750A had a math coprocessor, which made things a little easier than on the Z8002.
I have so much moe for the various post-Z80 chips Zilog made that people think failed in the marketplace, but which are still around and perfectly viable for applications that don't need 32 bits.
The latest round of STM32G0 is about $1 in quoted price before you start haggling. TI is launching a $0.40 (!) ARM micro soon. A modern toolchain, an ALU that doesn't suck, and a ton of ecosystem support are genuine improvements over z80. It's just the new default in the same way that z80 was a default (though less so, I grant).
And I say this as someone who still knows z80 opcodes from high school programming TI calculators. I love the thing but man is 32-bit nice.
I saw a tweet once that went something like, "Mankind strayed from god when we invented the IC", and some days I feel that. Economies of scale crush everything, including our nerdy fondness for particularly elegant ISAs.
You're right there. As transistors got cheaper, we've gotten into the habit of using them en masse to prop up horrible architectures with weird (but effective) tricks like enormous look-ahead, stupendous pipelines, and gigantic caches.
I recall with considerable fondness those ISAs that could be understood in ultimate detail by a simple human - the monolithic supercomputers like the Cray 1.
I dunno. There was that strange time in the 1980s where progress in microprocessors hung in the air, where Apple couldn't really find a sequel to the Apple ][, where the TRS-80 Model 4 wasn't much better than the Model 3, which wasn't better than the Model 1, where Commodore came out with a new machine every year but only a few of them gained any traction.
Coding assembly for the 8088, it was painfully obvious that instructions were competing with data on the bus, which is why the string instructions were so important.
Today people would scoff at that sort of thing because a tight loop can sit in the I cache and be just as efficient as a microcoded string instruction.
I traded my Coco 3 (which, unlike the PC, had a real multitasking OS) for an 80286 machine, and that was a massive jump in performance because the 80286 was starting to get those complex features that would start the "Moore's Law" period where computers got notably better on a year-by-year basis. The awful truth of memory latency really forces you to do those "horrible" things if you want to get near the performance that is possible.
Today I am an AVR8 fan because it has separate buses for instructions and data and gets awesome performance for something very simple that doesn't use all the tricks that later processors use. It's the last of the 8-bit processors, so it stands head and shoulders above the rest in terms of clean design, and it's got a mainframe-sized register file. In assembly language you can frequently keep most of the variables you use all the time in registers, dedicate a few registers to the interrupt handler so you don't need to swap registers, etc.
As for the Cray and the IBM 3090's vector units, it was nice that those machines had vector instructions that weren't bound to a particular implementation length. Compare the SIMD instructions that Intel has so often fumbled, which require you to rewrite your code every two years if you want to keep up, aren't available across the product line (so people other than national labs and Apple don't use them), and are arguably a waste of power and die area at this point.
I find it intriguing that memory is not just A bottleneck these days, but that memory is SO much slower than CPU. A big part of the performance difference must be the move off the main chip, but it just seems that when cache can be so quick, memory should be faster than it is.
>TI is launching a $0.40 (!) ARM micro soon. [...] It's just the new default in the same way that z80 was a default.
And yet many cheap electronic devices will continue to be built and sold with 4-bit microcontrollers because 32 bit is overkill for those use cases and $0.40 is too expensive when 4-bit dies cost $0.05 or even $0.01, and $0.35 in savings is huge in high enough volume.
You won't see any new products developed in the west using such microcontrollers but they're still alive and kicking and there's still developers for them in Asia.
At some point you get to a weird place where the logic transistors are so small you get some gate count for free just fitting around the pad drivers and pin protection diodes. We're getting pretty rapidly to the ~10k free gate count where a stripped-down classic RISC makes sense.
Out of curiosity what’s the cheapest microcontroller die you’ve come across? It would be interesting to know what you get at the lowest possible price point.
The cheapest mask-ROM microcontrollers you can't buy from distributors, only directly from the manufacturer. In large volumes they can be a couple of cents, since you buy the dies and package them yourself directly on the PCB, under the classic black blob of resin you see in watches/calculators and other cheap dollar-store widgets.
Something like the EM Microelectronic EM6680 or the Seiko Epson S1C63004, but a bit more spartan. Also parts from OKI/Lapis/ROHM Semi (they do the dies for Casio watches and calculators), and I think some Chinese vendors who bought out operations from Toshiba/Sharp/NEC for similar old-school parts.
100% agree that you can easily pay an engineer's salary with the economy of scale savings from using spartan parts. I was simply marveling at how cheap the fully-packaged, high-schooler-can-use-it, definitely-works-for-your-application default ARM has become :)
$0.40 may well be a (very) expensive part in some contexts. I've seen companies develop ASICs to shave off four legs from a chip. Once the volume is high enough such small bits really add up.
also written as moé. A Japanese slang term (ironically, first employed by otaku) used to refer to the fetish for or sexual attraction to idealized people, usually a fictional perfect young girl.
Since then, moé has come to be used as a general term for a hobby, mania or fetish (non-sexual or otherwise). This is contrasted with otaku, which would be taking the specific hobby, mania or fetish to harmful levels.
In the late 80s you'd still have a significant cost difference between 8 (or 16) bit CPUs and more powerful ones like the 68K. 10k or 100k transistors mattered a lot.
These days most smaller cores are integrated with peripherals, which can easily take the bulk of the silicon area (+cost). Then it's effectively free to tack on a small 32 bit core like Cortex-Mx (or even small 64b core). Why bother with 8/16 bit one?
I suspect RISC-V may eat a lot of that market in years ahead.
For components where CPU core is a good portion of the silicon (microcontrollers, RFID tags, toys, etc etc), that may be different. That's why many old 8/16 bit architectures are still around in one form or another.
There's a reason you can still buy Z80s in DIP (and more modern) packages on Mouser, along with the better peripheral chips like the Zilog SCC. All in stock, all for reasonable (i.e. not Rochester "we know you have to have this and no one else does") prices.
Thanks for mentioning that! I had an Amiga in the early nineties and consider myself pretty knowledgeable when it comes to Commodore history, but I wasn't aware of this machine either. The article doesn't mention why the C900 was cancelled, but I guess Commodore was afraid to go up against IBM with a purely business-oriented machine? So they did what many other big companies have done since: bought up a startup to get the Amiga - which was successful for a while, but then they failed to invest in developing it further...
I actually took the Captain Zilog self-study course (probably in 1981 or 1982). They mailed you the first installment and you answered the questions. Then you mailed back the answers to get graded and receive the next installment. I think the completion prize was a Captain Zilog t-shirt.
The thing that I remember was that the Z8000 had some pretty complex addressing modes.
There was an earlier, and to my way of thinking much better, idea at Zilog than the Z8000. That was the Z800 (later christened the Z280): a 16-bit machine akin to the 8086 but with upward compatibility with the Z80 and a much nicer architecture. I believe that Godbout made an S-100 card with this processor at one time. I wanted one so badly, and yet the 8088 and the IBM PC steamrolled the industry because "big blue" and all that. Sigh.
Ah, memory lane, I love you. The S100 was a great architecture, so incredibly versatile, every component was designed around a standard so solid that it lived for 20 years and likely there are still systems in industrial control in service today (though those would be pretty hard to keep running).
Is anybody on HN aware of S100 systems used in production today?
Aside from the systems we use directly in maintaining other S-100 boxes? Yes!
There are many Dahlgren engraving systems running with S-100 control crates. The boards are mostly Solid State Music whitelabels, except for the actual tool driving boards. They don't break often, but we do sometimes get requests to fix them.
Other S-100 boxes still exist, mostly being used by small old companies who are, frankly, in the "old owner's kids aren't going to take this over" death spiral. Whatever they're doing with S-100 still works for them, so they keep running it.
We've got customers running PDP-8s and PDP-11s controlling old CNC machines, so S-100 isn't the oldest thing we still service. I also know of a pile of 286s helping make some super cutting edge silicon...
Thank you for confirming my hunch. Besides being clever from a technical perspective, those systems were also designed in ways that make your average kitchen appliance look flimsy. You could probably drop one from a first floor and end up with damaged pavement, a fine, and a working system ;)
Some of them! Some S-100 stuff is definitely cheese grade construction. We end up repairing a fair number of Altair 8800s of various models for hobbyists and collectors, and they are...not well-designed systems. It's pretty amazing how well S-100 progressed from the initial MITS design, given how slapdash it was.
I used some pretty beefy boxes for industrial systems in the 80's and what struck me about them - and of the PCs of that era - is that they were all built to last for decades. Modern stuff is built to last for a couple of years at best in most cases, with some rare exceptions. Flimsy cases, plastic instead of steel. Unserviceable, non-standard and proprietary instead of standardized and repairable. Miniaturized as much as possible instead of open frame. And finally, undocumented versus documented right down to the component level and the software source code.
From a longevity perspective that stuff absolutely rocks, much like I'd much rather have a car from the 90's than one built today. The 90's one will outlive me, the one built today will need at least one replacement.
One story I heard was about Compaq and how their early PCs were over-engineered.
The floppy drives of early Compaq PCs were rated/tested for a really high number of insertions/ejections. You'd have to be doing nothing BUT inserting/ejecting floppies 24/7 to even get close to the rated limit.
Some beancounter at Compaq realizes they're overpaying. They lower the rated limit of insertions/ejections by a couple orders of magnitude and let the savings flow through to the profit line with the customer none the wiser. Didn't help against Dell and the other cheaper PC manufacturers though.
Yeah, add on top of that you usually got full schematics, and logic tables if there were programmable logic devices! That's how we're able to continue offering service on these systems.
There's a pretty good hobbyist community around S-100, a fair number of folks are designing hobbyist-level new boards and such. A lot of that is centered around the following site:
s100computers.com
Some of their designs are...weird...and often "that's not a bug, we like it like that" is what you'll get for pointing out actual design issues...but! it's still cool to see people hacking on it.
All generations of the MSX home computer were based on the Z80. Until the last one, the MSX Turbo-R, when ASCII Corporation designed the R800, a CPU derived from the Z800.
Very impressive how obvious the high-level design flaws of the Z8000 look. And these are not technical flaws, but clearly C-level decision flaws.
Only one question decided its fate: why did a SMALL startup decide to make a CHEAP design?
It was obvious that it was impossible to compete with Intel in the large, cheap market. But just as obviously, Intel would not compete in niches where a startup without legacy constraints could make something significantly better!
Exxon as a backer was not good, because their money was not Faggin's money, even more so than investors' money is not the money of Intel's execs.
So the second drawback of Zilog's situation was that Faggin had far less room to maneuver than Intel's top management. Why, then, did he limit himself even further with a cheap design?
I see only one possible explanation: Faggin just was not brave enough to propose something bolder, like the 68k, to Exxon's top management.
Or perhaps he really did, but they chose the simpler and more traditional design for that time.
In the end, I think Faggin was a technical genius, but it looks like this time politics outweighed engineering, producing an obviously flawed decision.
Does anyone have the history for why number of pins was such an important consideration (not just for Zilog, Intel too)? Was it really that much more expensive to make a 64 pin package than a 40 pin one?
There are a number of things that overlap here: the number of layers on circuit boards was still quite limited if you wanted the board to be affordable (2-layer boards were (back then...) 5x as cheap as 4-layer boards, and 4-layer boards were 5x as cheap as 6-layer ones). There is the cost of the package itself, and there is the cost of all of the infra around packaging and testing the die. The surface area of the die goes up with the square of the side length, but the perimeter available for bond pads only grows linearly with it, unless you want to stagger your connecting pads. But that would complicate wire bonding.
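The pad-ring constraint is simple geometry; a back-of-the-envelope check (the numbers are arbitrary illustration, not real die dimensions):

```python
# Doubling a die's side length quadruples its area (room for logic)
# but only doubles its perimeter (room for a single row of bond pads),
# so pad count falls behind transistor count as dies grow.
side = 4.0                              # arbitrary units
area, perimeter = side ** 2, 4 * side
big_area, big_perimeter = (2 * side) ** 2, 4 * (2 * side)

assert big_area == 4 * area             # logic capacity: 4x
assert big_perimeter == 2 * perimeter   # pad sites: only 2x
```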
Chip packaging has come a very long way from the days when a 40-pin DIP was considered a large chip. Ken Shirriff (righto.com) has a series of articles about classic chips that show what high-density designs looked like in those days, and you can clearly see how the limits of packaging play into the designs of the dies themselves.
A 64 pin package that would not occupy a very large amount of board area (this is well before SMD became commonplace) would use all kinds of tricks such as staggered legs (sometimes up to three rows) and really tiny traces to be able to keep the costs down.
This is a fascinating bit of tech history, and the fact that the packaging kept pace with Moore's law is maybe not quite as impressive as what was happening on the inside of the package, but it was a major achievement and an enabler in and of itself.
From memory, the reason given was that test machines topped out at 40 pins. Anything bigger was much more expensive to test. IC testers 40 years ago would apply input sequences to devices and check that the outputs were correct. That works for simple logic, but it was already a problem for processors.
The 8086/8088 multiplexed the address and data busses to get the pin count to 40. Which wasn't bad when using DRAMs since they're also multiplexed. Again to save cost.
Advantage of the 8088 was the 8-bit data bus. So you only needed 8 DRAM chips. Which was a big cost savings. An extra 16k of DRAM could cost hundreds of dollars.
Packaging had been a limiting factor for a long time, e.g. the Intel 8008 has the weird multiplexed bus it has, and requires so much external logic, because Intel was largely "just" a memory chip company at the time and couldn't package bigger than 18 pin DIPs.
Prototype carriers were likely significantly more expensive for >40 pin DIPs, just due to the lower demand. The ceramic packages with a die well and glued or brazed lid are available as an off-the-shelf item for low run, prototype, or special chips, and that's what a lot of chips back then started off in, including the Z8000.
As other comments suggest, escape routing was also a secondary issue. Not a problem for "Texas cockroach" 64-pin DIPs, but PGAs and such.
I can't find it now, but I remember one of Asianometry's videos telling the story of a Japanese manufacturer traveling to Texas and discovering he could offer American clients his lead frames at 1/10 the domestic cost and still make a killing.
Zilog could get a bigger market share.
Still remember those times programming in Z80 machine code. It was much more flexible and convenient than the 8080.
You can still find Zilog parts in embedded systems; they are not dead at all.
> This was even though Intel’s 8086 was, by many measures, a design that was markedly inferior to the Z8000.
I don't see this as true at all. The 8086 segment registers are fine-grained, so you can use the single 16 bit value in a single register as a "pointer" to memory as long as you're OK with your blocks being allocated in 16 byte chunks and don't try to do array indexing on the raw value. This doesn't match the C memory model well, but it's very usable and for years DOS software exploited this trick to get clean addressing across the full 1M memory space of the PC with near-zero overhead[1].
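The fine-grained segment trick described above can be sketched in a few lines (a minimal model of 8086 real-mode address formation, not tied to any particular compiler or memory model):

```python
def real_mode_address(segment: int, offset: int) -> int:
    """8086 real mode: physical = segment * 16 + offset, 20 bits wide."""
    return ((segment << 4) + offset) & 0xFFFFF

# A single 16-bit segment value acts as a "pointer" to any 16-byte-aligned
# block ("paragraph") anywhere in the 1 MB space, with zero runtime cost:
assert real_mode_address(0xB800, 0x0000) == 0xB8000

# Many distinct segment:offset pairs alias the same physical byte,
# which is why raw-value comparisons and array math on the segment
# value alone don't fit the C memory model:
assert real_mode_address(0x1234, 0x0010) == real_mode_address(0x1235, 0x0000)
```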
The Z8000 had two-register addresses[2] and you needed to carry those two full words around everywhere. It's equivalent to writing DOS software with the "Huge" memory model pervasively (something pretty much no performance-sensitive software did, historically), because that's all the CPU could support. It's a big, big loss.
And all you get back for your trouble is a bigger register set, which looks great in an ISA document but doesn't really make up for the terrible memory access design. Zilog spent their transistors on the wrong things.
[1] Not zero, because there are only two usable segment registers to hold these pointers, but then the architecture had only 4 true GPRs anyway.
[2] IIRC (and seemingly confirmed by a quick Wikipedia check) not even linear ones! The "segment offset" wasn't packed next to the low address bits. So you can't even do natural math on arrays bigger than 64kb.
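The non-linear layout the footnote describes can be sketched like this (the field positions follow the Wikipedia description of Z8001 segmented addresses - 7-bit segment number in the upper byte of the high word - so treat the exact bit positions as an assumption):

```python
def z8000_pointer(segment: int, offset: int) -> int:
    """Z8001-style long pointer in a 32-bit register pair:
    segment number in bits 24-30, 16-bit offset in bits 0-15,
    bits 16-23 unused."""
    return ((segment & 0x7F) << 24) | (offset & 0xFFFF)

# Incrementing the pointer past the end of a segment carries into the
# *unused* bits 16-23, not the segment field, so a plain 32-bit add
# cannot walk linearly through an array bigger than 64 KB:
p = z8000_pointer(segment=3, offset=0xFFFF)
q = (p + 1) & 0xFFFFFFFF
assert q != z8000_pointer(segment=4, offset=0)
```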
FTA: “One cheaper version would fit into a 40-pin package. This version would have its memory address sizes limited to 16-bits and so would only be able to use 64 KB of memory. A more expensive 48-pin version would have access to a 23-bit or 8 MB address space.
To support this approach, the architecture would be based on segmented addresses.”
“The 6507 (typically "sixty-five-oh-seven" or "six-five-oh-seven") is an 8-bit microprocessor from MOS Technology, Inc. It is a version of their 40-pin 6502 packaged in a 28-pin DIP, making it cheaper to package and integrate in systems. The reduction in pin count is achieved by reducing the address bus from 16 bits to 13 (limiting the available memory range from 64 KB to 8 KB) and removing a number of other pins used only for certain applications.”
The instruction set of the 6507 can address 64kB of memory, but it lacks connections to the outside world to actually use its full address range (it might even be possible for a hardware hacker to open up a 6507 and add those extra pins, depending on how much the 6507 internals differ from those of the 6502).
I believe the Z8001 and Z8002 are different designs (can't find a die shot of the Z8001 but Z8002 die only has 40 pads).
Zilog could have gone down the 6507 style route, I guess, but the Z8002 would have been more expensive to produce.
So use of the word 'support' is really 'in support of a cheaper 16 bit address / 40 pin version' where savings are made not only on packaging but on the die.
Sorry, I wasn’t clear enough. The point was (and is) that I don’t see why limiting the address space to 64kB in one of the chips would require using segmented memory, not that they could have reused the same design between the two.
I mentioned the 6507 as an example where a CPU has a larger address space in the instruction set than the hardware allows.
I could also have mentioned the 68000: 32-bit addresses in the instruction set, but its address bus only had 24 bits, and it initially didn't support virtual memory, so a byte was wasted (or, in some cases, put to creative use) in every pointer.
The Z80 will always be my first love, in a TRS-80 Model 1 and soon after, the Vector Graphic Vector 3 which was a sturdy S-100 machine running CP/M. Robert Harp of Vector was an extremely competent designer faced with a challenge, a computer with CPU, CRT and keyboard in one unit. All-in-one machines were a challenge because noise from the video stream and HV flyback transformer pervaded the insides, so he restricted the 6 card S-100 cage width to a few inches and found the worst transients on signal lines and placed extra conditioning on the bus, mods reported back to Zilog to assist with future Z80/S-100 builds. The Vector 3 paper manual had 40 dense pages in the back with commented assembly of its entire BIOS. That was a nice touch and I taught myself assembler from it.
> One key implementation decision was that the Z8000 would not make use of microcode. Microcode would have broken down the Z8000’s instructions into a series of simpler instructions, hidden from the outside world, which the processor would execute. Instead, all instructions would be ‘hard-wired’ into the logic of the CPU, a more challenging approach for the designer. Shima would later discuss how much more difficult the Z8000 was to create when compared to the Z80, and how it tested the limits of the tools that he had available:
“… there are so many instructions in Z8000 it is impossible to store all of test vector for debugging in the memory of test bench anymore. Also, MOS process was getting denser and denser and also the size of the defects in masks was getting smaller. That is it was not so easy to find the fully functional die.”
This seems like a remarkable engineering decision, to go entirely without microcode. Are there any comparable examples post-1979?
The NEC V33 (1988) was a hardwired (not microcoded) version of the 286. The decision not to use microcode might be related to the long lawsuit between Intel and NEC over microcode.
Also, as krylon points out, RISC chips generally don't use microcode.
According to the datasheet, which I've just looked at, it was not a '286 equivalent. Rather it added a paging mechanism whereby a 20 bit 'linear' address was mapped to a 24 bit 'physical' address. Using 1024 16k pages, so a bit like an internal LIM 4.0 scheme, but limited to 16M total rather than 24M.
It has a pair of new instructions, to enter and exit the extended mapping mode, and the mappings are only changeable when not in the extended mapping mode. So a form of memory protection.
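The mapping described above can be sketched as follows. The field widths are my inference from the figures given (16 KB pages, 20-bit linear, 24-bit physical): a 6-bit page index selecting one of 64 table entries, each naming one of 1024 physical frames. The table contents here are purely illustrative:

```python
PAGE_BITS = 14                 # 16 KB pages -> 14-bit in-page offset
page_table = [0] * 64          # 2^20 / 2^14 = 64 linear pages
page_table[2] = 0x3FF          # e.g. map linear page 2 to the top frame

def translate(linear: int) -> int:
    """20-bit linear address -> 24-bit physical address."""
    index = (linear >> PAGE_BITS) & 0x3F          # which linear page
    offset = linear & ((1 << PAGE_BITS) - 1)      # byte within page
    return (page_table[index] << PAGE_BITS) | offset

# Linear page 2, offset 0x123 lands in physical frame 0x3FF:
assert translate(2 * 0x4000 + 0x123) == 0xFFC123
# Unmapped pages fall through to frame 0 in this toy table:
assert translate(0x0005) == 0x0005
```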
Thanks Ken, that's pretty fascinating info. It makes me wonder if, with modern design software, a modern 64 bit CPU could be made without any microcode either.
Some friends of mine worked at a startup in the early 80's working on a z8000 based multi-user business system (Proteus for anyone around Vancouver then).