Every day, more and more people are understanding just how little control they have over the processors that run in their Intel and AMD chips [1][2]. Maybe soon there will be enough people who care to stop giving Intel and AMD money and fund projects like these ones.
OpenPOWER seems to be quite open and free. Normally it's quite expensive though, although their main products seem to be aimed at servers.
That said, you always pay: either hard cash for a computer, or your privacy and freedom.
It's a balancing act. I don't believe I need absolute 100% control of all things going on in my computer; if I did, I wouldn't be able to utilize it anyways. It's pretty telling that Intel ME has been largely invisible to even savvy users until recently.
Having competition is probably part of what's needed to ensure that there's some control. The problem is that none of the competition is currently approaching it from a user freedom standpoint. After all, most companies making processors are publicly traded and even if they weren't, need to sell a huge volume of them to break even and have decent per-unit costs. So trying to target what is unfortunately a very tiny niche of users isn't going to go well.
And for the truly paranoid, it's impossible to prove that the processor wasn't tampered with, that there isn't a ring below yours. You can really only ever get to the point of being able to reasonably doubt the existence of it, which won't satisfy everyone.
I think Intel is digging its own grave by taking the worst stances on issues of user freedom. Eventually, if Intel ME's power over the machine continues to increase, something bad is going to happen as a result of it, and it's going to erode the already fragile trust people have in them. The ME exploit was part 1 of that.
But what about companies like AMD, or the hundreds of ARM manufacturers that take a more passive route of not caring about user freedom? There's no imaginable consequence for that. And there's no real benefit of reversing course, either. I can't see a future where open computing will be anything more than an extreme niche with sub-par options.
One of the better scenarios I can think of is that computing as we know it today becomes a niche itself and a lot of the users do value privacy and security. This seems plausible the better that Chromebooks and phones and tablets get; the average user won't need a computer tower or a laptop with 16 GB of RAM. But I imagine it won't be good news for the prices we currently enjoy :)
> or the hundreds of ARM manufacturers that take a more passive route of not caring about user freedom?
I think ARM SoCs are even worse. Try finding a full (or even partial) datasheet/technical documentation for anything like the ones used in recent smartphones and tablets, for example. They are going to want an NDA. The only ones publicly available were leaked. They also differ significantly from each other, often even within the same model line, so it's nothing like a PC, where a lot of things still remain relatively standard.
ARM also has TrustZone, their DRM technology of which there is almost no public documentation.
TrustZone is not DRM nor a management engine like Intel's ME. It is just a reference design in order to implement some sort of sandbox and security zone in the SoC.
ARM doesn't mandate how TrustZone is used, it just provides the technology. If SoC designers use it to force DRM into their products, it's not really ARM's fault.
Also, just to make it clear, it is not ARM's fault if ARM based SoCs lack proper documentation.
It's at least partially ARM's fault, as ARM doesn't publish documentation for their cores. SoC vendors are just as bad, if not worse, but ARM doesn't get off scot-free.
Actually, the Arm server space is solving this problem through standardization: the SBSA and SBBR specs. All systems from different vendors will look the same to the OS and boot the same way as x86 servers. You will be able to run the same binary OS distribution on any system. Gone are the days of a BSP for every SoC. And these server specs will drive how non-server systems look, since there will be an inherent cost to doing something different (in hardware, software, and firmware).
> I think ARM SoCs are even worse. Try finding a full (or even partial) datasheet/technical documentation for anything like the ones used in recent smartphones and tablets, for example. They are going to want an NDA.
If you're thinking of someone like MediaTek, Allwinner or the like: they aren't really in the merchant chip business. They pretty much design a chip for a single large project and supply it with support chips, (usually crummy) drivers, and even a linux or android port as a "complete unit". Often there isn't really a data sheet as such; they send some engineers along to help with integration.
This, by the way, was the original meaning of "SoC" when people were starting to think of common hardware/software codesign in the 1990s: a purpose-built chip for a specific design built out of a combination of standard IP and custom circuitry, but conventionally fabbed instead of being FPGA. Think of it as integration of some of the board design onto a single die. The IP market people thought at the time would eventually develop didn't really, though.
Intel and AMD are still in the good old days of data sheets, second sourcing (pretty much dead, but it's still in those companies' DNA), etc.
"I don't believe I need absolute 100% control of all things going on in my computer"
Neither do I, and probably neither do operating system developers. But having control doesn't mean constantly fiddling with every internal hardware bit of a machine, leaving no time to do actual work (if that's what you meant, I'm totally with you). To me, having control means preventing others from doing things behind our backs: sort of a switch we use only once to turn that crap off, then keep under watch just to prevent others from using it.
Public information about ME and similar subsystems should help us to do that.
> I don't believe I need absolute 100% control of all things going on in my computer; if I did, I wouldn't be able to utilize it anyways.
It's possible to have 100% control of your computer while still being able to utilize it. At a higher level, take Linux as an example: I can certainly utilize it, but I still have 100% control over it. The difference here is that I have the option to change things as I see fit, even when I don't know how something works or even that it exists. Most of the time I leave it alone, but there are times where doing things like swapping out the window manager or a driver can be helpful.
I'm hoping for RISC-V to come through, because it has a chance to commodify even further than OpenPOWER could (due to exercising somewhat less control at the point of control for the trademark, and having a design which will suit more levels of designers, possibly allowing designers to enter new markets more readily). That said, the current POWER designs perform very well, so they might be a good interim solution for me.
RISC-V may do that tomorrow, but Arm servers are doing it today. You have a wide range of solutions with different CPUs in a variety of form factors and compute capacities, from uCPE and IoT gateway boxes all the way to high-density OCP designs with terabytes of RAM and a hundred cores, all compatible at the software and firmware level.
Let me know when an affordable Arm board with ECC support comes along; wholesale or "call for price" not allowed. Because as it stands, it might as well be SPARC or MIPS in terms of how accessible the hardware is.
Also, calling is easy. https://www.phoenicselectronics.com will sell you a Gigabyte MT30-GS2 (Cavium ThunderX 32-core) 1U system for around $2k. If you want to provide your own ATX case - much less.
Hardware is accessible, but there are a lot of big players buying things en masse, and the market isn't oversaturated. This means long lead times and some price fluctuations. But things have been getting much better with every year (out of Arm servers' whole three years of public availability). Definitely more accessible than anything OpenPOWER (which sadly will set you back the price of a small used Kia). The recent launch of third-gen Arm servers like the Qualcomm Centriq will lower the price of older kit.
Incidentally, still the only Cortex-A72 design out there. And for what you get, the A8040- and A7040-based solutions have a ridiculously low max power consumption of something like 20-30 watts.
Your understanding is incorrect [1]. Intel ME is another processor running the MINIX operating system. It has been used in all Intel chips made since 2008 and cannot be disabled. It even contains a JVM for running Java. That operating system contains modules (or "apps") that run, some even when your computer is "turned off". For example, one application is AMT, which can allow you to remotely control your computer; another is an app that controls the fan.
There are generally two versions of the firmware which can be used. A light version normally used for personal computers and a heavy version normally used for servers.
The light version doesn’t contain the AMT module, the heavy one does.
If you are using the firmware that includes AMT, it can be disabled.
The problem is we don’t actually know what other modules are running, who can run modules and which ones use network access.
This is true for the latest iterations, where MINIX 3 is embedded in the Platform Controller Hub (PCH). Earlier iterations were running on ThreadX RTOS embedded in the Northbridge.
To my understanding, some systems (but not all) implement wake-on-LAN for the ME, even when the system is shut down but still connected to a power source (which is true of most modern machines).
Running a multibillion dollar silicon foundry costs money.
You're always welcome to design your own chips, then barter for them with pelts.
The result of this project thus far is totally underwhelming, and I'd think whoever donated to it not only didn't need the computer and could wait an indefinite amount of time for one, but is likely throwing their money away and may never get a product.
This project isn’t about designing CPUs, so “running a foundry” is irrelevant. They’re working to build a laptop that’ll run an off-the-shelf POWER-based CPU, which is complex but still massively simpler than designing and building the CPUs themselves.
Correction: I meant PowerPC, not POWER. The latter is not likely to fit comfortably in a laptop. If you want a POWER-based workstation, check out the Talos II from Raptor Computing.
I'm not sure if this was posted here to make fun of how little progress they've made in 3 months, to commend them how much progress they've made, or to just introduce the project.
I am confused, impressed that somebody still wants to make a powerpc laptop, but also underwhelmed by what essentially amounts to a generic description of a computer.
Well, the diagram itself wasn't that interesting, but it sounds like they did a lot of work on the BoM and such, which could easily take that long even for folks who have done it before.
The block diagram seems to be missing one or two things, too. How does the SoC query the battery charge, for example? Or is this accomplished as a push-based API from the System PSU (the only power-related component connected directly to the SoC)?
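For what it's worth, laptops typically answer the battery-charge question over SMBus using the Smart Battery Data Specification, where the battery itself (at bus address 0x0B) exposes registers like Voltage() and RelativeStateOfCharge(). A minimal sketch of that query pattern, with the actual bus read simulated since there is no real hardware here (the register numbers follow the SBS spec; everything else is hypothetical):

```python
# Sketch of how an SoC or embedded controller typically reads battery
# charge over SMBus, per the Smart Battery Data Specification (SBS).
# Register numbers follow SBS; the bus-read function here is simulated.

SBS_VOLTAGE = 0x09                   # Voltage(), reported in mV
SBS_RELATIVE_STATE_OF_CHARGE = 0x0D  # RelativeStateOfCharge(), in %

# Simulated raw word reads standing in for a real SMBus transaction
# to the smart battery at address 0x0B.
_fake_battery = {SBS_VOLTAGE: 11400, SBS_RELATIVE_STATE_OF_CHARGE: 87}

def smbus_read_word(register):
    """Stand-in for a real 16-bit little-endian SMBus word read."""
    return _fake_battery[register]

def battery_status():
    """Poll the two registers a charge indicator usually needs."""
    return {
        "voltage_mv": smbus_read_word(SBS_VOLTAGE),
        "charge_pct": smbus_read_word(SBS_RELATIVE_STATE_OF_CHARGE),
    }

print(battery_status())  # -> {'voltage_mv': 11400, 'charge_pct': 87}
```

Either way, the block diagram would ideally show which bus that traffic rides on.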
"Why PowerPC?:
The PowerPC architecture design is newer than the other successful CPU architectures.
..."
This is like saying a screensaver of a fire is better than an actual fire in your fireplace because it's newer, ignoring the fact that it doesn't come close to half the features of a real fire (except maybe the risk of burning your house down).
But as a long-term Apple user from before the x86 era, I can't deny the sentiment of going with the underdog :)
I'd guess you can emulate a PPC (for non-memory-intensive tasks) on a modern x86 faster and with less power than running a PPC natively. I might be completely off. And with the parallelization and containers trend, which benefits a lot from memory and fast context switches, they would have more luck funding servers for VM farms than laptops.
Huh? Emulation is always slower than anything running natively on comparable hardware, since there's an extra layer of abstraction. Do you mean something else here?
Nope, I mean exactly that. You're forgetting that x86 is leaps ahead of any PPC in clock speed.
Even at 1/8th of bare-metal performance on an average top-of-the-line x86 CPU of today, you still have more than the best PPC money can buy.
That goes even for memory- and IO-intensive tasks, where PPC is supposed to shine because of its obscenely huge on-CPU caches (compared to x86), once you remember that the state-of-the-art PPC IO standard is from 2004 and could barely handle dual gigabit NICs.
I might be completely off and unaware of newer PPC offerings, but I doubt there have been any since the early 2000s, when I last looked into all this.
tl;dr: there is no PPC comparable to x86 in terms of clock speed.
As expected, my laptop blows what was arguably the "fastest" PowerPC computer out of the water: http://browser.geekbench.com/geekbench2/compare/1989293/2644.... However, I'm not sure this difference is enough for emulation to work at native speed. Taking the Wii (which used PowerPC) as an example, it wasn't until recently that most computers could handle emulating it without a significant frameskip. The G5 is much more powerful than 729 MHz Broadway that the Wii had.
These guys are working with an Amiga vendor who has done a series of designs around the Applied Micro PowerPC 440 series SoCs (systems-on-a-chip). The block diagram clearly indicates an SoC. (IBM POWER, the fast PPC chips, are not SoCs!)
I infer that this is another PPC 440 design. Those chips top out at 1.2 GHz with just a couple cores, and they are a legacy product as far as Applied Micro is concerned -- AM is now focused on their ARM products.
At best, this project is a very pokey laptop from a near-dead Amiga OEM. At worst, it's a scam.
I'm guessing POWER has surpassed what a laptop from 2003 can do since the iBook G4 was released. What if you want a better display? More expansion? A GPU? Bluetooth 4? I'd be surprised if an iBook G4 can run a modern Linux stack.
It can run Ubuntu-MATE 16.04, OpenBSD, MorphOS and more.
It's barely usable for web browsing. Even with upgraded hardware (1.25GB RAM and a CF card instead of hard drive) Firefox on Ubuntu and TenFourFox on Mac OS X Tiger are really damn slow. OWB (WebKit) on MorphOS is a bit better, but still not what I'd use for everyday work.
Yeah, I was really hoping Linux could resurrect my old PowerBook G4. It had great specs for its day and was still in great shape.
After trying a whole load of options, Lubuntu 14.04 was the best but there wasn’t anything which provided a modern web experience.
I briefly thought of trying to make a Raspberry Pi casemod with it before realising how hard it would be - seemed a shame to get rid of a perfectly good screen and enclosure just because the processor is no longer supported.
iBook G4 on Mac OS X Tiger is slow. It's borderline unusable compared to any Intel Mac, even if you stick to "stock" (i.e. use Safari for web browsing, Mail for mail, etc.). I wouldn't recommend it to anyone.
Unfortunately, the first question that popped into my head was not "awesome, a processor that won't spy on me" but "will it be x86 compatible?".
I know, it's heresy--you should only need open-source software, and as long as you can get a compiler running, open source software should work. However, the vast majority of users simply won't care, not if they have to give up their favorite software. "I can't use Photoshop? But I need it!"
If you want an open processor to take off, it'll need some kind of x86 compatibility.
Wine+QEMU seems to be a reasonable way of executing many Windows programs on non-x86-64, non-Windows platforms. Alternatively, you could install a real Windows in a full-system emulator and run on that.
As someone who ran x86 software (primarily Windows at the time) via software emulation (virtualpc) on production PowerPC hardware (Apple PowerMac) I have to disagree.
If you don't need x86 compatibility, PPC is fine, and for open source stuff it's probably reasonably well supported in major distros.
But pretending you can have anywhere close to reasonable performance of x86 binaries on even the newest PPC chips is hopeful at best and realistically a fanciful notion.
You are comparing a system simulator to an instruction set simulator. Those are very different things.
When you run an application on QEMU on Linux, only the application code is simulated (by just-in-time compiling every basic block when it is first executed, then always just executing that compiled native code). System calls go to the Linux kernel on the host system, which runs as usual. Hardware is not virtualized. In contrast, Virtual PC (if I understand correctly) simulates the hardware of an entire PC, including graphics hardware, and runs all of Windows on top of that. That's a lot more expensive in terms of processing power.
That said, sure, you will probably lose some performance with QEMU. If the simulated code uses lots of fancy vector instructions that are not provided by the host hardware, you'll even lose a lot.
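The just-in-time scheme described above can be sketched as a translation cache keyed by guest basic-block address: pay the translation cost once, then reuse the compiled result on every later execution. An illustrative toy in Python (this is not QEMU's actual TCG API; all names here are made up for the example):

```python
# Toy sketch of a dynamic binary translator's block cache, loosely
# modeled on how QEMU's user-mode JIT reuses translated code.
# Not QEMU's real internals; names are hypothetical.

translation_cache = {}   # guest block address -> "translated" host code
translations_done = 0    # how many times we paid the translation cost

def translate_block(guest_addr, guest_code):
    """Pretend to JIT-compile one guest basic block to host code."""
    global translations_done
    translations_done += 1
    # A real translator would emit host machine code here;
    # a closure stands in for the emitted code in this sketch.
    return lambda: f"executed block @ {guest_addr:#x}: {guest_code}"

def run_block(guest_addr, guest_code):
    """Execute a guest block, translating it only on first encounter."""
    if guest_addr not in translation_cache:
        translation_cache[guest_addr] = translate_block(guest_addr, guest_code)
    return translation_cache[guest_addr]()

# A hot loop hitting the same block is translated exactly once,
# then runs the cached code on every iteration.
for _ in range(1000):
    run_block(0x4000, "add r1, r2")
print(translations_done)  # -> 1
```

This is why the steady-state overhead of user-mode emulation is far lower than the "interpret every instruction" intuition suggests: most cycles go to already-translated code.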
I don't think QEMU in instruction set emulation mode emulates the MMU. Does it? The host system has a perfectly fine MMU. QEMU translates assembly code to assembly code. That is all it does, as far as I know.
Correct, currently usermode QEMU does not emulate the mmu, but just has a flat mapping of guestaddr = hostaddr + constant_base_addr. There has been some thought of adding mmu emulation to usermode, though, as the simple fixed mapping has problems (eg if the guest wants to map at some address where there's already something in the host address space, or for guest archs where the highbits of guest addresses are important like ia64).
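That flat mapping is literally one addition per access, which is why user-mode emulation can skip MMU emulation entirely. A toy model (the offset value is arbitrary and purely illustrative):

```python
# Toy model of QEMU user-mode's flat address mapping: guest address to
# host address is a single constant offset, with no page-table walk.
# The constant below is arbitrary, chosen only for illustration.

GUEST_BASE = 0x7F00_0000_0000

def g2h(guest_addr):
    """Guest-to-host translation: one addition, no MMU emulation."""
    return guest_addr + GUEST_BASE

def h2g(host_addr):
    """Host-to-guest: the inverse mapping."""
    return host_addr - GUEST_BASE

# The weakness mentioned above: if the host already has something mapped
# at g2h(x), a guest mmap() at x cannot be honored, and guest address
# bits wider than the host's address space are simply lost.
addr = 0x0040_1000
assert h2g(g2h(addr)) == addr
print(hex(g2h(addr)))  # -> 0x7f0000401000
```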
I wasn't as concerned about Windows-only programs as I was about non-open x86 programs in general. While there's definitely more closed Windows programs, there are Linux builds of closed-source tools which are only available for x86.
I'd say it isn't the market that has spoken; software developers have spoken for the market. Local programs are usually what people want (remember the original iPhone and the backlash against its lack of native applications). Today most businesses, program managers, and devs prefer the compatibility, fast development time, or full control that a web-based service provides. Many times it's a negative for the customers.
I'm thinking that the next architecture to break into PC/mobile will probably come through Chromebooks, but will nonetheless need a well-performing dynamic binary translator for a popular architecture or two (better than QEMU, which is apparently okay-but-not-great).
If PowerPC is such a great architecture, then why did Apple transition the Mac platform to x86 about 12 years ago? Has PowerPC gotten much better since then?
That had basically nothing to do with the architecture. The key reason was likely the diverging requirements between laptop (Apple) and server (IBM) chips.
The rumor back then was that the G5 wouldn't fit in a laptop form factor and ran far too hot, so Apple went to Otellini (Intel) for a deal. They had swapped architectures within the Mac OS once before: from m68k to PPC. I don't know enough about it to say whether that's true or false; perhaps someone else can verify?
I think the main driver is that there's common ground between some of the engineers who are PPC fans and some who are Amiga/MorphOS fans. Amiga runs on m68k and PPC, so this means they can get some more modern hardware for their platform.
If these incredibly useful ports are not present, they cannot be used. If they are present, they can be used, but you don't have to do so.
This isn't a single laptop design: this is a prototype for building lots of different machines. If you want a NUC-style tiny brick computer, you can build one from this. If you want a RAID server, you can build a small one directly or a medium-sized one by adding a PCIe controller. It's never going to appeal to an ultrabook buyer, since it would be really difficult to engineer all the other components for ultra light weight, but you could replace the vast majority of non-gaming cheap laptops with this.
Oh, and integrated touchpads are usually connected with PS/2, SMbus or I2C.
The PS/2 port is probably for the wide range of trackpad/keyboard components that still use that protocol internally, not a physical external port.
Lots of devices do this - even high end network gear that have a "USB" port externally wire it to a USB to Serial converter internally, where Serial is the actual protocol in use.
1. https://libreboot.org/faq.html#intelme
2. https://libreboot.org/faq.html#amd