Link technologies in today’s data center networks impose a fundamental trade-off between reach, power, and reliability. Copper links are power-efficient and reliable but have very limited reach (< 2 m). Optical links offer longer reach but at the expense of high power consumption and lower reliability. As network speeds increase, this trade-off becomes more pronounced, constraining future scalability. We introduce MOSAIC, a novel optical link technology that breaks this trade-off. Unlike existing copper and optical links, which rely on a narrow-and-fast architecture with a few high-speed channels, MOSAIC adopts a wide-and-slow design, employing hundreds of parallel low-speed channels. To make this approach practical, MOSAIC uses directly modulated microLEDs instead of lasers, combined with multicore imaging fibers, and replaces complex, power-hungry electronics with a low-power analog backend. MOSAIC achieves 10× the reach of copper, reduces power consumption by up to 68%, and offers 100× higher reliability than today’s optical links. We demonstrate an end-to-end MOSAIC prototype with 100 optical channels, each transmitting at 2 Gbps, and show how it scales to 800 Gbps and beyond with a reach of up to 50 m. MOSAIC is protocol-agnostic and seamlessly integrates with existing network infrastructure, providing a practical and scalable solution for future networks.
If I peed in my pants (or was in a similarly awkward situation) and someone happened to have filmed it, what kind of legal rights (if any) do I have to get my shameful moment removed from the Internet?
Great article that gets to the key points.
The need for a "persistent memory bus" is a very bad sign by itself. It means no existing software/systems can benefit from the feature without new code being written at the volatile/persistent boundary. That's too much effort for a "not much faster" device.
RIP.
It's the ideal cooker for me. I take a shower only once per week, and most of my gas cost goes to cooking. It will save ~75% of my gas meter reading (but only ~15% on my gas bill, though).
Just tried to insert 100M KV pairs of the same data format into a KV store. It took about 26 s on a Xeon Silver 4210, including fully writing the data down to the filesystem/SSD.
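For context, a benchmark like this can be sketched in a few lines. This is a hypothetical re-creation, not the actual (unnamed) KV store from the comment: `bench_insert` is an illustrative helper, `std::unordered_map` only models the in-memory insert path (not the flush to the filesystem/SSD), and the pair count should be scaled down from 100M for a quick run.

```cpp
#include <chrono>
#include <cstddef>
#include <string>
#include <unordered_map>

// Insert n fixed-format key/value pairs (mirroring "pairs of the same
// data format" in the comment) and return the elapsed wall-clock seconds.
double bench_insert(std::size_t n,
                    std::unordered_map<std::string, std::string>& kv) {
    kv.reserve(n);  // avoid rehashing, so we time inserts, not growth
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i) {
        kv.emplace("key-" + std::to_string(i), "val-" + std::to_string(i));
    }
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}
```

Dividing the pair count by the returned seconds gives the ops/s figure to compare against the ~3.8M inserts/s (100M / 26 s) reported above.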
I started to use const whenever possible after becoming familiar with some compiler optimizations and the Haskell programming language. The point is knowing how to give the compiler an easier job.
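A minimal sketch of what "an easier job" means here (the names `scale` and `area` are made up for illustration): a `const` global is a promise the value never changes, which lets the compiler fold it into callers instead of reloading it on every call.

```cpp
// Because `scale` is const, the compiler may fold it into the multiply
// at compile time. A mutable global would force a fresh load each call,
// since other code could have changed it in the meantime.
const int scale = 3;

int area(int w, int h) {
    return w * h * scale;  // can compile down to w * h * 3
}
```

The same reasoning applies to `const` references and parameters: every mutation the compiler can rule out is one less thing it must conservatively assume.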
Sending those patches is just disgraceful. I guess they were using their edu emails, so banning the university is a very effective action to get someone to respond. Otherwise, the researchers would just quietly switch to other communities such as Apache or GNU. Who wants buggy patches?
In addition to code reduction, this may also help reduce the number of occupied branch-prediction slots (mostly on x86, I guess?). Say two conditional branches each have a 50/50 outcome, which doesn't benefit from branch prediction. Merging them via outlining can reduce them to a single conditional branch instruction. Since it's still "unpredictable" during execution, one prediction slot is saved for free.
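A toy illustration of the idea (all names here are hypothetical, and whether the predictor actually shares a slot depends on the microarchitecture): after outlining, the 50/50 branch lives at exactly one code address, no matter how many call sites feed it.

```cpp
// Counters updated by the unpredictable 50/50 branch.
int pos = 0;
int neg = 0;

// Outlined classifier: the compare-and-branch exists at one address,
// so it occupies a single branch-predictor slot. Before outlining,
// each caller would have carried its own copy of the same branch.
void classify(int x) {
    if (x >= 0) ++pos; else ++neg;
}

// Two call sites that previously each had an inline copy of the branch.
void scan_a(const int* a, int n) { for (int i = 0; i < n; ++i) classify(a[i]); }
void scan_b(const int* b, int n) { for (int i = 0; i < n; ++i) classify(b[i]); }
```

The trade-off is the added call overhead, which is why outlining passes typically target cold or size-critical code.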
You're not VIM's hostage. You're just following a very natural way of navigating--it's very easy to learn and you don't easily forget it.
A very different story is using other shortcuts like F5 for single-step, F6 for continue, etc. I always forget these after two weeks of not using them.