I'm not sure they're all that much simpler. The basic plumbing probably hasn't changed much; it's just that modern fabrication tech means you can hide all the complexity inside.
Doesn't matter. In Scotland, we have variously a former BBC news reader who was married to a politician, and a Gaelic child actor (often on BBC-made programmes) whose father was MP for the Outer Hebrides and whose uncle was a BBC news reporter. You could argue neither of those were political roles. It comes over as very cliquey and nepotistic to me. There are millions of people in NZ, many of whom can't get into these circles, and folk at the top of politics and media hang around with each other. Then the public wonders why they come over as out of touch.
Ardern's uncle(?) was also a big noise in the Mormon church. He used to run the Pacific region, where it has major influence in countries like Samoa and Tonga. She's well connected.
I lived on Dana St in Oakland at the Berkeley border, and we continually found lost and confused people. The street continued down our block and stopped, reappeared after a jog half a block away; numbers got smaller as it went south, while over the border numbers got smaller as they went north; and the street disappears for 3 blocks as it crosses Telegraph Ave (literally where the telegraph was installed, in a straight line) diagonally - 4-5 pieces with numbers going in opposite directions.
Even worse, our portion was originally in Berkeley; the border was moved 100 or so years ago so that someone could open the closest bar to the campus, back when Berkeley had more stringent liquor licensing.
If you can print small enough with this technology, I'm pretty sure you can make transistors - 1980-era transistors, sort of, not very dense. But if you are printing bulk materials you can build in 3D rather than 2D, get interesting numbers of transistors - CPUs in everything!
And anyone implementing numerical algorithms is thankful for the tremendous amount of thought put into the fp spec. The complexity is worth it and makes the code much safer.
There was actually no "thought" put into the IEEE spec as such. It was merely a codification of the design of the Intel FPU (only one of many, very different, FP implementations pre-standardisation). There was thought put into that implementation, but the "standard" is essentially that design written down.
It has many, many warts, and many design choices were made under the hardware constraints of the time, not with an eye to what a standard should be.
imo they were wrong almost as much as they were right. -0.0, the plethora of NaNs, and having separate Inf and NaN all make life a lot more annoying for people writing algorithms, for very little benefit.
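To make the annoyance concrete, here's a quick sketch in Python (the behaviour below is plain IEEE 754 semantics, so most languages act the same way):

    import math

    # -0.0 compares equal to 0.0, yet carries a distinct sign bit
    print(0.0 == -0.0)               # True
    print(math.copysign(1.0, -0.0))  # -1.0, so the two are distinguishable

    # NaN compares unequal to everything, including itself
    nan = float("nan")
    print(nan == nan)                    # False
    print(min(nan, 1.0), min(1.0, nan))  # nan vs 1.0: result depends on argument order

Anything that sorts, deduplicates, or takes a min/max has to special-case this.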
I think I would find it very challenging but fun. Certainly more fun than writing a date/time library (way more inconsistent cases; daylight savings time horrors; leap seconds; date jumps when moving from Julian to Gregorian) or a file system (also fun, I think, but thoroughly testing it scares me off).
When people see that binary-float-64 causes 0.1 + 0.2 != 0.3, the immediate instinct is to reach for decimal arithmetic. And then they claim that you must use decimal arithmetic for financial calculations. I would rate these statements as half-true at best. Yes, 0.1 + 0.2 = 0.3 using decimal floating-point or fixed-point arithmetic, and yes, it's bad accounting practice to sum a bunch of items and get a total that differs from the true answer.
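For anyone who hasn't seen it, the classic demonstration (Python here, using its built-in decimal module):

    from decimal import Decimal

    print(0.1 + 0.2 == 0.3)  # False: binary float64 can't represent 0.1 exactly
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True in decimal arithmetic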
But decimal floats fall short in subtle ways. Here is the simplest example - sales tax. In Ontario it's 13%. If you buy two items for $0.98 each, the tax on each is $0.1274. There is no legal, interoperable mechanism to charge the customer a fractional number of cents, so you just can't do that. If you are in charge of producing an invoice, you have to decide where to perform the rounding(s). You can round the tax on each item, which is $0.13 each, so the total is ($0.98 + $0.13) × 2 = $2.22. Or you can add up all the pre-tax items ($1.96) and calculate the tax ($0.2548) and round that ($0.25), which brings the total to $0.98×2 + $0.25 = $2.21, a different amount. Not only do you have to decide where to perform rounding(s), you also have to keep track of how many extra decimal places you need. Massachusetts's sales tax is 6.25%, so that's two more decimal places. If you have discounts like "25% off", now you have another phenomenon that can introduce extra decimal places.
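A minimal sketch of those two invoicing strategies with Python's decimal module (I'm assuming round-half-up here; real tax rules prescribe their own rounding modes):

    from decimal import Decimal, ROUND_HALF_UP

    price, rate, quantity = Decimal("0.98"), Decimal("0.13"), 2
    cent = Decimal("0.01")

    # Strategy 1: round the tax per line item, then total
    per_item_tax = (price * rate).quantize(cent, rounding=ROUND_HALF_UP)  # 0.13
    total_1 = (price + per_item_tax) * quantity                           # 2.22

    # Strategy 2: total the pre-tax amounts, then round the tax once
    subtotal = price * quantity                                           # 1.96
    total_tax = (subtotal * rate).quantize(cent, rounding=ROUND_HALF_UP)  # 0.25
    total_2 = subtotal + total_tax                                        # 2.21

    print(total_1, total_2)  # 2.22 vs 2.21 - same items, different invoice totals

Note that decimal arithmetic didn't make the decision for you; both computations are exact right up until you choose where to round.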
If you do any kind of interest calculation, you will necessarily have decimal places exploding. The simplest example is to take $100 at 10% annual interest compounded annually, which gives you $110, $121, $133.10, $146.41, $161.051, $177.1561, etc., and you will need to round eventually. Another example: 10% annual interest, but computed daily (so 10%/365 per day) and added to the account at the end of the month - not only is 10%/365 inexact in decimal arithmetic, but many decimal places will also be generated in the tiny daily interest calculations.
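You can watch the digits pile up (and the inexactness appear) with the same module - a small sketch:

    from decimal import Decimal

    balance = Decimal("100")
    for year in range(1, 7):
        balance *= Decimal("1.10")
        print(year, balance)  # 110.00, 121.0000, 133.100000, ... two more digits each year

    # and the daily rate isn't even representable exactly in decimal:
    print(Decimal("0.10") / Decimal("365"))  # rounded to the context's 28 significant digits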
If you do anything that philosophically uses "real numbers", then decimal FP has zero advantages compared to binary FP. If you use pow(), exp(), cos(), sin(), etc. for engineering calculations, continuous interest, physics modelling, describing objects in a 3D scene, etc., there will necessarily be all sorts of rational, irrational, and transcendental numbers flying around, and they will have to be approximated one way or another.
When writing financial software, one almost always reaches for a decimal library in whatever language one is using, and ends up using that instead of the language's built-in floats. (Sometimes you can use ints, but you can't once you need to do things like those described above.)
Overall, yes, results need to be rounded, but it's pretty much financial software 101 not to use floats.
I'm a long-time Verilog user (30+ years, a dozen or so tapeouts), and I've even written a couple of compilers, so I'm intimate with the gory details of event scheduling.
In the early days, some people depended on how the original Verilog interpreter ordered events; it was a silly thing (models would only run on one simulator - a cause of lots of angst).
The '<=' (nonblocking) assignment fixed a lot of these problems: using it correctly means that you can model synchronous logic without caring about event ordering (at the cost of an extra copy and an extra event, which a compiler can mostly optimise away).
In combination, 'always @(*)' with '=', plus 'assign', give you reliable combinatorial logic.
In real-world logic a lot of event ordering is non-deterministic - one signal can appear before/after another depending on temperature. All in all, it's best not to design anything that depends on it if you possibly can: do it right and you don't care about event ordering; let your combinatorial circuits waggle around as their inputs change and catch the result in flops synchronously.
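To make that concrete, a minimal Verilog sketch of the style I mean (my own illustrative module, nothing specific):

    module add_reg (
        input            clk,
        input      [7:0] a, b,
        output reg [7:0] sum_q
    );
        // combinatorial logic: waggles freely as a and b change
        wire [7:0] sum;
        assign sum = a + b;

        // synchronous capture: '<=' samples its right-hand side at the
        // clock edge, so the result doesn't depend on simulator event order
        always @(posedge clk)
            sum_q <= sum;
    endmodule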
IMHO Verilog's main problems are that it: a) mixes flops and wires in a confusing way, and b) outside the synthesisable subset, lets you do things that do depend on event ordering and can get you into trouble (though you sometimes need that to build test benches).
BTW, my really big peeve about modern Verilog is that it never picked up {/} as synonyms for begin/end - my experiments (20 years ago) showed that it was an easy extension, and the minor syntactic ambiguities were trivially fixable.
A 68451, or a custom SUN-like (SRAM-based, kind of like a PDP-11) MMU - there was a guy who went around Silicon Valley in the mid 80s designing SUN-like MMUs for companies; they were all different, and some were broken (couldn't protect user space from kernel space).
68000s, however, had a problem: they couldn't return correctly from a page (MMU) fault (68010s fixed that) for a pre-VM (pre-BSD or SVR2) UNIX world - but you could get around this with a few smarts.
yeah, that's rather a pain though, and it effectively leaves one 68k frozen while the other services the page fault - it means you can't run another user process while the page is being read in (because it too might cause a page fault)
Of course, while you're doing the next version you should knock out a tiny tapeout version - it should easily fit in a single cell (maybe 2 if you want to push the 256-byte SRAM in as well).