AMD Receives Approval for Acquisition of Xilinx (amd.com)
122 points by transpute on Feb 10, 2022 | 66 comments


If I have one wish for AMD, it's that they would make FPGAs (and Xilinx) a more open and diverse platform, like the PC. Not that the PC is perfect (there's still closed-source firmware), but any improvement to the current state of FPGAs would be welcome.


You've struck on the fundamental problem the FPGA industry has been trying to solve for 30+ years: how to get an FPGA into the hands of every developer, the way GPUs have spread to become essential tools.

Nobody has come up with a good answer yet. Developing for an FPGA still requires domain-specific knowledge, and because place & route (the "compile" for an FPGA) is a couple of intertwined NP-hard problems, development cycles are necessarily long. Small designs might take an hour to compile; the largest designs deployed these days take ~24 hours.

All this to say: while they are neat, nobody has found the magic-bullet use case that will make everyone want one enough to put up with the pain of developing for them (a la machine learning for GPUs). Simultaneously, nobody has found the magic bullet to make developing for them any easier, whether by reducing the knowledge required or improving the tooling.

Effort has been made in areas like High-Level Synthesis (HLS, compiling C/C++ code down to an FPGA), open-source tooling, and (everyone's favorite) simulation, but they all still kinda suck compared to developing software, or even the ecosystem that exists around GPUs these days. You'll often hear FPGA people say things like "just simulate your design during development, compiling to hardware is just a last step to check everything works" - but simulation still takes a long time (large designs can take hours), and tracking down a bug in waveforms is akin to Neo learning to see the Matrix.


If the FPGA industry thinks it has been trying to do this for decades, then it has been going about it seriously wrong! Keeping your systems as black boxes, with unit prices and development costs that make them prohibitive for anything but high-margin devices, effectively guarantees they'll never become popular consumer commodities.

Given how open development works, the straightforward minimal investment is to publicly document some devices' bitstream formats and bootstrap the ecosystem by releasing some reliable Libre place-and-route software. The software doesn't even have to contain all of the trade-secret heuristics; it just has to build with ./configure && make && make install and be functional enough that individual developers can scratch their own itches.
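For reference, community tools along exactly these lines (Yosys for synthesis, nextpnr for place and route) already come close to that bar for a few device families. A rough sketch of building them from source, assuming a Linux box with the usual build dependencies installed:

  # synthesis
  git clone https://github.com/YosysHQ/yosys && cd yosys
  make && sudo make install

  # place & route (the iCE40 build also needs Project IceStorm
  # installed first, for its chip database)
  git clone https://github.com/YosysHQ/nextpnr && cd nextpnr
  cmake . -DARCH=ice40 && make && sudo make install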


Why not ship an integrated FPGA in CPUs?

Being able to offload a repeated, complex MIMD computation to an FPGA treated like an instruction could be a huge win for scientific computing and any large, steady workload that is expensive enough for companies to invest in optimizing for the FPGA. If this became commonplace and relatively inexpensive, then large corporations would likely fund improvements to compilers to make the developer experience simpler and faster.


There are such CPUs, and the uptake has been minimal because, as GPGPU has shown, not every developer is capable of actually using them.

Your example could just as easily be done on a GPGPU.


I just wanted to note that Intel tried that and it didn't work. See pjmlp's reply.

I still think the idea is sound; the way to go about it needs a lot of rethinking.


You don't seem bullish on the prospects of using Vitis [0] to deploy a machine learning model to a Xilinx FPGA?

[0] https://www.xilinx.com/products/design-tools/vitis/vitis-pla...


Disclaimer: I work in this space (not at Xilinx), comments are strictly my own opinions and do not reflect any positions of my employer, etc.

Broadly speaking, FPGA-based ML model accelerators are in an interesting space right now, where they aren't particularly compelling from a performance (or perf / Watt, perf / $, etc.) perspective. If you just need performance, then a GPU or ASIC-based accelerator will serve you better - the GPU will be easier to program, and ASIC-based accelerators from the various startups are performing pretty well. Where an FPGA accelerator makes a lot of sense is if you otherwise need an FPGA anyway, or need the other benefits of FPGAs (e.g. lots of easily controlled IO) - but then you're just back to square one of "there are some cases where an FPGA makes sense and many where it doesn't". Beyond that, there are a few niche cases where a mid-range FPGA might beat a mid-range GPU on perf / Watt or whatever metric is important for you.

Again, opinions are my own and all that. As someone in the space, I am very much hoping that someone - whether an ASIC startup or Xilinx / Intel - comes up with a "better" (more performant, cheaper, easier to use, etc.) solution than GPUs for ML applications. If the winner ends up being FPGAs, that would be really, really cool! Just at the moment it's not too compelling, and I'm trying to be realistic.

All that said, FPGAs and their supporting ecosystem (software, boards, etc.) are an $Xb / Y market - nothing to shake a stick at - and there are many cases where an FPGA makes sense. It just doesn't currently make sense for every dev to buy an FPGA card to drop in their desktop to play with.


>come up with a "better" (performant, cheaper, easier to use, etc.) solution than GPUs for ML applications

you probably are aware, but Xilinx themselves are attempting this with their Versal AIE boards, which are (in spirit) similar to GPUs in that they group together a programmable fabric of programmable SIMD-type compute cores.

https://www.xilinx.com/support/documentation/architecture-ma...

i have not played with one but i've been told (by a xilinx person, so grain of salt) the flow from high-level representation to that arch is more open

https://github.com/Xilinx/mlir-aie


Fascinating, thank you! Admittedly I don't keep the closest tabs on what Xilinx is doing.


Yeah, a coprocessor with an FPGA on it, or even an expansion card I can buy at Micro Center for 100 bucks, would be great!


Or even better, a useful FPGA inside the Ryzen processor itself. It might be very small, but if it were standardized, had an easier way to program it, and let programs load acceleration routines into it, that would be so cool.


This is what I've been thinking about for ages. It could be a huge accomplishment and greatly improve efficiency for scientific computing, data analytics and OLAP, cloud gaming, general backend development, etc.


Intel has done the equivalent with some Xeon processors.

The chiplet design of current AMD CPUs should make this even easier to do.


PCs are only open due to IBM's failure to prevent Compaq's endeavours.

If anything, we have seen the whole industry moving back to those days as a means to get out of razor-thin margins, especially now that desktops are a very niche market for most consumers.


>If anything, we have seen the whole industry moving back to those days as a means to get out of razor-thin margins,

Yes. It is interesting now that Apple is sort of like the new IBM.


What's wrong with the current state, except that the chips have lead times of 52+ weeks?


The development side. Compiling and simulating your Verilog/VHDL can be done with open-source software, but to put a design on the FPGA itself, you generally need closed-source (and sometimes paid) tools to generate the bitstreams. Contrast that with microcontrollers such as the ATmega, which can be programmed from start to finish using an entirely FOSS stack - even the bootloader and programmer. And for some reason, these companies consider the bitstream formats trade secrets and refuse to document them at all.
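To make the contrast concrete, here's a minimal sketch of the open half of that flow - simulation with Icarus Verilog and GTKWave - assuming a hypothetical counter.v plus a counter_tb.v testbench that calls $dumpfile/$dumpvars (the file names are just placeholders):

  # compile design + testbench, then run the simulation
  iverilog -o counter_tb.vvp counter.v counter_tb.v
  vvp counter_tb.vvp

  # inspect the resulting waveforms
  gtkwave dump.vcd

The step after this - turning the synthesized design into a bitstream and flashing it - is usually where the closed vendor tool becomes unavoidable.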


this is true in general but

1) vivado webpack edition (ie free) lets you write (and flash) a bitstream for some of the small chips. i know it at least works for the artix-7 family because i'm doing it every day lately

2) for the artix-7 (and some lattice chips) you supposedly can use OSS (https://github.com/SymbiFlow/prjxray). i haven't tried it yet but one problem i can foresee is that the OSS tools won't infer stuff like brams and dsp. in fact the symbiflow people (i think?) explicitly call this out as the part of the project that's a work in progress.

some useful links:

https://arxiv.org/abs/1903.10407

https://github.com/YosysHQ/nextpnr

https://www.rapidwright.io/


> and some lattice chips

Lattice has been by far the favorite of the FOSS community, but there's been more news:

- https://github.com/YosysHQ/apicula has appeared for Gowin FPGAs found on e.g. Sipeed Tang Nano boards (very cheap on AliExpress)
- a vendor called QuickLogic made SoCs that only use the FOSS toolchain for the FPGA part, out of the box: https://www.quicklogic.com/products/soc/eos-s3-microcontroll...


>Lattice has been by far the favorite of the FOSS community

i'm interested in the OSS flows but i haven't dug in yet. so some questions (if you have experience): isn't it only for their ice40 chips? and how smooth is the flow from RTL to bitstream to deploy?

one hesitation i have with jumping in is that i'm working on accelerator type stuff, so my designs typically need on the order of 30k-50k LUTs. will yosys+nextpnr let me deploy such a design to some chip?


I don't have that much experience (don't really have many use cases for FPGAs personally tbh) but:

Icestorm is for iCE40, Trellis is for ECP5 (which comes in variants up to 85k LUTs);

the flow is simple enough to do manually but there are things that make it one-click. This tutorial series https://youtube.com/playlist?list=PLEBQazB0HUyT1WmMONxRZn9Nm... uses one.
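Done by hand, the flow is roughly the following - a hedged sketch for an iCE40 UP5K part (top.v, the top.pcf pin constraints, and the package are placeholders, and iceprog assumes an FTDI-based board):

  # synthesize with Yosys
  yosys -p 'synth_ice40 -top top -json top.json' top.v

  # place & route with nextpnr
  nextpnr-ice40 --up5k --package sg48 --json top.json --pcf top.pcf --asc top.asc

  # pack into a bitstream and flash
  icepack top.asc top.bin
  iceprog top.bin

The ECP5 / Trellis flow has the same shape, with synth_ecp5, nextpnr-ecp5, and ecppack swapped in.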

As for handling really big designs, I don't know.


I was productive using the previous generation Xilinx toolchain (ISE) within a few months of starting from scratch. I had hobbyist electronics experience but other than that was coming from a pure software background.


Received approval from the Chinese anti-trust body. Still waiting for the FTC sign-off, but that should be a rubber stamp at this point.

This acquisition brings AMD up to par with Intel's market share in the chip market.


To be fully honest, I don't understand why companies do so much business in China. Yes, there's money, but there's so much danger and risk involved it increasingly looks like betting on Bitcoin for your retirement.

An example of this would be what happened to the ARM China division, which has gone completely rogue and the Chinese government couldn't care less.

There's also the fact that Intel and AMD, as makers of the most high-performance parts, should have the upper hand. If they refused to do business in China, the Chinese government and citizens would be forced to mass-import their CPUs and would not be able to control them. ARM is not even close to ready to run Chinese data centers.

Of course, this would incentivize China to immediately do everything they could to replace Intel and AMD, but they could be forced to grovel in the meantime. And Intel and AMD can feel comfortable they aren't supporting genocide (at least according to the UN).


Companies aren't to blame. Our lawmakers are. They need to pass laws that protect US interests, but for the last 3 decades they've sold the American worker's soul to the CCP & private-gov enterprise in China. Apple spent $275 billion to boost Chinese manufacturing, but fuck-all to do anything in the US. They could have built a whole ecosystem of US electronics manufacturers and made everything in the US + Mexico, but their shareholders will never vote for it. So laws must be passed to force them. The revolving door in DC has crushed the working-class people of the US, and the same goes on in the EU and the West generally.

[1] https://www.theinformation.com/articles/facing-hostile-chine...


> They need to pass laws that protect US interests, but for the last 3 decades they've sold the American worker's soul to the CCP & private-gov enterprise in China.

Companies are not blameless - it's a 2-step process. Lawmakers sold the American worker's soul to capital (companies), and the companies sold them on to China, with a fat markup.

The motivations are clear: politicians get donations and cushy jobs, the company executives get very rich due to increased short-term profits and increased valuations on the hope of establishing a foothold in a humongous growth market, China gets tech transfer, exports and a skilled workforce, and American workers get nothing, save perhaps a tiny sliver of company ownership through a 401k, if they are lucky to have one.


Typical American narcissism.

The reality is that China's farsighted rulers don't want to be a US client state/ colonial possession and have imposed strict rules on market participants in exchange for access to their 1.5 billion consumer market. American businesses can put up or shut up.


The current FTC and DoJ leadership are the most anti-trust since the (first) Roosevelt administration, it seems (never mind that they were established after the fact; I'm being hyperbolic).


Why?

Has Intel done anything with Altera? I’m curious.

AMD’s acquisition of ATI is generally viewed as a strategic error. Does this make more sense?


> AMD’s acquisition of ATI is generally viewed as a strategic error.

Historically it was (when AMD was drowning in debt from the acquisition and had no competitive processors on the timeline), but is this still true? For instance, all of the major game consoles (ps4/ps5/xbone/xbox series x/switch) use AMD boards, and as I recall, those deals lifted the company's financials quite well.


It's complicated. The margins on console chips are lower than PC margins. When AMD had spare capacity (GloFo), that was great. But with everyone at TSMC and AMD demand high, those wafers could be used for more profitable chips. But the console deals definitely helped AMD in the dire times.

IMO, the strategic error was AMD paying a bunch of cash for ATI instead of doing an all-stock deal. In 2006, with their stock near its all-time high (~$40), AMD decided to borrow $2B instead. Six months later, AMD was issuing shares at $14 to meet expenses. Two years later, AMD was trading at $1.50 and on the verge of bankruptcy. In retrospect, AMD's outcome could have been vastly different if they had done the all-stock deal in 2006, or even issued shares instead of borrowing.


It's possible they offered shares behind closed doors and were turned down though.


Right, but AMD could have issued shares to the public, then paid cash with the proceeds, if ATI insisted. We don't really know what happened.


AMD still uses GloFo


Only for the IOD and some low-end APUs.


Agreed, although the Switch uses an NVidia Tegra, not AMD, this generation.


For now they do; honestly, it would make more sense to maybe consider an AMD SoC.


I would be a little surprised if they switched off of ARM. Even the Steam Deck with its custom AMD SoC is much closer to laptop specs and wattages than the Switch, e.g. a 20W+ SoC and more than 2x the battery capacity vs the Switch. The Switch has a lot more in common with phones than it does with other consoles or PC portables, so IDK if we will see AMD SoCs with the right power profile for Nintendo's needs.

Nintendo could use one of the new Samsung SoCs with RDNA2 graphics though. Those could be pretty good competition for Tegra X* chips.


I said that it would make more sense because they would open their platform to a lot more games, maybe cut a lot of dev work, and they might do it at competitive prices (as the Steam Deck has shown).

Now, Nintendo doesn't care much about that because they're into their own thing; they don't even see themselves as a tech or gaming company, they position themselves as an entertainment company.

Like, I think they see themselves closer to Disney than to Sony Computer Entertainment.

About the power profile: the Switch came out with around 2-3 hours of battery life while gaming, depending on the game of course, so I don't think it would be much of a problem.


Half or so of all Switch games are already made with Unity and I don't think Switch not being x86 is the reason most other games are not ported to Switch.


AMD has ARM licenses (IIRC, both architectural and for IP cores) and has shipped an Arm Cortex-A57-based SoC in the past (the Opteron A1100). If Nintendo really wanted, they might be able to order an Arm SoC from AMD directly. But yeah, going with Samsung would be far more practical.

Problem is, many existing Switch games use some nvidia-specific API (IIRC called "NVN" or whatever) instead of Vulkan.


It's interesting to think about. I'm not so sure myself; yes, an AMD chip would probably yield better GPU performance, but people forget that Nvidia just flat-out has the better software stack. Would games like Red Faction have made it to Switch without CUDA physics? Would No Man's Sky be viable on Switch without leveraging its massive optimization stack for Nvidia hardware? How about the Borderlands collection and its extensive use of PhysX?

I can see where the case for AMD could be made (the Steam Deck's APU is truly impressive), but I think Nintendo has invested pretty heavily into the Nvidia stack at this point. The Tegra boards that they made the original Switches with were left over from the Nvidia Shield; nobody was sure if the concept would take off or not. Now that the Switch has outsold the Wii, I'd imagine Nvidia and Nintendo are working very closely to design a successor to the chip, even if it's not going to be a mobile console.


> Would No Man's Sky be viable on Switch without leveraging its massive optimization stack for Nvidia hardware? How about the Borderlands collection and its extensive use of PhysX?

To be fair, developers have been making games run on underspecced AMD hardware for more than a decade at this point between the XBox One and PS4, and any games coming to PC have to have AMD alternatives to CUDA physics, PhysX, etc. anyway.

Also, it's not like the AAA ports have been without issues; many of them run poorly and look pretty awful compared to first-party efforts like BotW. Their optimization stack might be amazing, but recycling the Shield SoC AND underclocking it is definitely causing some pain.

It does seem like Nintendo has buddied up to NVidia more than console manufacturers usually do, so I think it's likely they stick with NVidia, but I wouldn't write off the possibility entirely, especially since AMD seems so willing to please with custom solutions even for much lower-volume products like the Steam Deck.


Yeah, but even if you missed out on some of those games that were made with the help of marketing budgets from NVIDIA (to use such proprietary tech), it would on the other hand have potentially opened the door to thousands of other games (if they went the x86 route).

Probably one of the best things that happened to gamedevs, console-wise, was Microsoft and Sony going for x86 and AMD SoCs.

What you're saying, which is true, is that Nintendo likes to cut costs and is cheap on the hardware they use, because they care more about the content for their IP than the tech behind it. They could keep stretching the Switch for 3-5 or more years if they wanted to.


Ah, sorry, my mistake!


As already mentioned, not the Switch.

A few of these are pre-acquisition, but the GPUs for the Xbox 360, GameCube, Wii, and Wii U were all ATI/AMD as well.

I'm sure at this point more console hardware has come from AMD than from any other manufacturer.


Console deals are terrible margin - there is a reason AMD won all those deals. If you have competitive chips, you would rather use your silicon elsewhere.


Terrible margin for the console manufacturer (Sony, MS, Nintendo)... but do we have any evidence that they're terrible margin for the component manufacturers? Presumably these consoles wouldn't be sold at a loss if the components were bargain-basement, as you imply?

Also, if we look at the latest gen, there are some pretty cutting-edge architectures and manufacturing processes used in the new Xbox/PlayStations. Zen 2 is no slouch.


Everything I've heard about the Altera deal is that Intel has basically just been milking existing customers rather than attracting new ones. If AMD does more with Xilinx sooner than they did with ATI[1], it could be a good strategic move. If they start making FPGAs standard on their CPUs (short term on the enterprise stuff, longer term on everything) in the same way FPUs are, this could provide them with a significant advantage in both CPUs and GPUs. The other half of the equation would be to open up the FPGAs enough to allow 3rd-party toolchains, so that they could be at least as open as CPU ISAs. If they keep the Xilinx (and most other FPGA manufacturers') business model of 100% proprietary devices and toolchains, I think people will look back on this acquisition years from now the same way they do ATI and ask 'why?'. Basically, it seems rather smart if they want the tech to enhance their existing offerings, but rather dumb (especially for the price paid) to keep the FPGA business as a silo.

[1] The rationale of getting into the GPU business made a lot of sense but here we are 15 years later and it's arguable AMD has largely squandered the opportunity. Even in APUs where they should be slaughtering Intel, AMD continues to hobble them by putting previous gen GPU cores in them. This was during a decade and a half when Intel's iGPUs were fairly weak and nVidia had nothing to counter with... smart move AMD!


I don't think the ATI acquisition was an error; integration has been harder than expected. The game consoles would have been difficult without having a world-class GPU available.

The work they did around HSAIL and APUs shows that they have the creativity and the will.


I think integration was always going to be hard, and most people who understood the problem knew it was going to be hard or impossible, but they went ahead with it anyway.

At the time they were talking about integrated compute units between CPU and GPU, mapping GPU register space into CPU address space, linking/unlinking CPU and GPU pipelines, etc etc. And the CPU and GPU teams would be working on this while also having to produce top-shelf stand-alone CPUs and GPUs to fight on 2 separate fronts against entrenched and capable competitors. It didn't really seem possible or even desirable at the time.

I also don't understand why having a GPU would improve their chances with console vendors. Those vendors could have still selected AMD CPUs but cut separate deals for the GPU with other companies. This is a business AMD was already in, and still is. They do this in all sorts of markets, like network switches.


Having an existing APU line makes the chances of success a whole lot higher.

I don't know what the grand plans were to have CPU and GPU teams both in a multiple-level-deep tick-tock pipeline (organizations are structured like CPUs, or CPUs are structured like assembly lines... pick your causal metaphor). That would require the right fabric, and I don't know if they have that yet. There is a ton of research showing that a mediocre GPU on the same memory bus as a CPU has a huge number of advantages over a GPU on the PCIe bus, even at x16. With HBM, hell, put the GPU in the HBM. All HSAIL the HBM!

CPU, GPU and FPGA will all merge into a computational goo. With AMD acquiring Xilinx and their past with APUs and HSA, I stand by my statement. AMD has been stalling on the G series Ryzen parts because I think they don't want to cannibalize their discrete GPU business, but CPUs have more than enough compute and memory bandwidth to support software geometry and rasterization.

https://moorinsightsstrategy.com/the-real-reasons-microsoft-...


Intel sold a power electronics group(?) for 85 million dollars that Altera had bought for 140 million: https://www.anandtech.com/show/16257/mediatek-subsidiary-to-...

I also know that many clients ditched SoCs with ARM cores from Altera when Intel acquired it, fearing that Intel would not like those alien cores.


Is it still? I thought the ability to produce semi-custom APUs for game consoles more or less saved AMD when they were unable to compete in the CPU market.


> Has Intel done anything with Altera? I’m curious.

To me, Intel taking over Altera and removing the Altera name was a mistake. I don't associate the name "Intel" with FPGAs, nor do I think they have a real interest in them beyond server acceleration. To me, Altera and their product line are dead.

I don't know what AMD will do with Xilinx, but I hope it isn't more of the same hype-driven nonsense of ML/AI/crypto/cloud chasing. And it will be sad to see the Xilinx name disappear.


Opinions on it are definitely mixed. It's an all-stock deal that was made at a very reasonable price, but 1.5 years on they are overpaying by a huge amount. Xilinx has been doing very well compared to Intel's FPGA division, so you can try to justify it, but from a shareholder perspective it's very dilutive.


There are some interesting datacenter interconnect implications, especially around the new CXL spec.

https://en.wikipedia.org/wiki/Compute_Express_Link


I see it as a data center play: SmartNICs, DPUs, etc. I'd love to play with an FPGA co-processor in my workstation, but I doubt that is the goal.


I hope they won't kill their CPLD line, which has been slated for obsolescence several times and then pushed back. It's a lifesaver for designing digital circuits if you don't want to use a ton of 74-series chips.


It was my understanding that Lattice's FPGAs / CPLDs are more tailored to the smaller (possibly hobbyist-scale) EEs out there.

Bonus points: Lattice has turned a blind eye to the open-source community, meaning Lattice's FPGAs / CPLDs have open-source toolsets. Kinda sad that "turning a blind eye" is the best we can hope for from these companies, but that's how it is for now...


Not all of these companies.

QuickLogic openly embraces open-source tooling for their FPGAs.


I was awarded an FPGA chip by Xilinx, which I blogged about: http://msapaydin.wordpress.com. They have recently been making an effort to promote the use of Xilinx FPGAs for machine learning.



Yes, thank you. Sorry, I am on mobile, traveling, and could not post the full link.


This is bad.



