At the traditional 96 dpi, you have to be 3 ft away to exceed the retinal density. Personally, I sit at half that distance, so something around 200 dpi would be more ideal. With laptops you might sit even closer.
Mobile devices, unless you get really close to the screen, have matched the retinal density for a while. Most people hold the device at about 8 inches, so 450 dpi is the value to hit.
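Those figures check out if you assume 20/20 vision resolves about one arcminute: the matching pixel density is roughly 3438 divided by the viewing distance in inches. A quick sketch of the arithmetic (the 1-arcminute figure is the standard assumption, not something from the comments above):

```rust
// Pixel density at which a display matches the eye's resolving power,
// assuming 20/20 vision resolves ~1 arcminute per pixel.
fn retina_ppi(distance_inches: f64) -> f64 {
    let one_arcmin = (1.0_f64 / 60.0).to_radians();
    1.0 / (distance_inches * one_arcmin.tan()) // ≈ 3438 / distance
}

fn main() {
    println!("{:.0}", retina_ppi(36.0)); // 3 ft: ~95 ppi, i.e. the classic 96 dpi
    println!("{:.0}", retina_ppi(18.0)); // half that distance: ~191 ppi
    println!("{:.0}", retina_ppi(8.0));  // handheld at 8 in: ~430 ppi
}
```

The 450 dpi target for phones is this same formula at 8 inches with a little headroom.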
Edit: These measurements assume 20/20 vision, which is the average. Many people exceed that, so you'd need slightly higher values if you're feeling pedantic.
Having the focal point up close for long periods isn't good for the eyes, so
sitting closer than an arm's length to a desk monitor isn't a habit that holds up well.
100 dpi with subpixel rendering already maxes out horizontal angular resolution. It doesn't max out everything (the full retinal limit), so you still see some artifacts, but in practice that's not very relevant. The price in energy/bandwidth rises quadratically for very little gain.
To get the equivalent screen area of 4K at 100 ppi, at 200 ppi you have to put the burden of 8K onto the GPU. For now that's a bad trade: high ppi is fine for small monitors and handheld devices, but for a decent desk with several good monitors, GPUs just aren't ready yet.
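The quadratic cost is just pixel count: doubling ppi at a fixed physical size quadruples the pixels the GPU has to fill. Comparing the standard 4K and 8K resolutions makes the 4x jump concrete:

```rust
// Raw pixel counts, in megapixels, for a given resolution.
fn megapixels(w: u64, h: u64) -> f64 {
    (w * h) as f64 / 1e6
}

fn main() {
    let four_k = megapixels(3840, 2160);  // ~8.3 MP, ~100 ppi on a large desktop panel
    let eight_k = megapixels(7680, 4320); // ~33.2 MP, same area at 200 ppi
    println!("{:.1} MP vs {:.1} MP, {:.0}x the work", four_k, eight_k, eight_k / four_k);
}
```

Multiply by several monitors and a high refresh rate and the fill burden gets out of hand quickly.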
For function multiversioning, the intrinsic headers in both GCC and Clang have logic to take care of selecting targets. You also don't need to dispatch manually when writing hand optimizations: declaring the same function name with different targets is supported, and the right version is dispatched automatically at runtime.
> Things like the famous fast inverse square root are short, but I would hesitate to describe it as simple.
Not the best example. That snippet was in use at SGI for years and actually written by Gary Tarolli. Quake's optimization was mostly done by Michael Abrash.
The original id engines were also famously inflexible. They fit the mold of "developing an engine, not a game" to a T. What you saw them do was all they could do. Look at how much Half-Life needed to add to be viable. idtech3 also only broke out of its niche because Ritual and Infinity Ward heavily modified it and passed it around. There's a good reason the engine-based ecosystem is so prominent now.
What changes were needed in Half-Life? Quake seemed flexible enough to modify, even though it was rushed and pushed close to the limits of the hardware at the time.
From the beginning, one of the advertising tricks they have used for AI is FOMO. I presume that is so they can sell you as much of it as they can before you realize its flaws.
Everybody's so worried about getting in on the ground floor of something that they don't even imagine it could be a massive flop.
Did you pay attention in computer science classes? There are problems you can't simply brute-force. You can throw all the computing power you want at them, but they won't terminate before the heat-death of the universe. An LLM can only output a convolution of its data set. That's its plateau. It can't solve problems, it can only output an existing solution. Compute power can make it faster to narrow down to that existing solution, but it can't make the LLM smarter.
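For a concrete sense of scale (my numbers, not the parent's): even a trivially stated brute-force search, like enumerating a 128-bit key space, is out of reach at any conceivable compute budget. At an absurdly optimistic 10^18 guesses per second:

```rust
// Years needed to enumerate a `bits`-wide key space at a given guess rate.
fn brute_force_years(bits: i32, guesses_per_sec: f64) -> f64 {
    let secs_per_year = 3.156e7;
    2.0_f64.powi(bits) / guesses_per_sec / secs_per_year
}

fn main() {
    // ~1.1e13 years: roughly 800x the current age of the universe.
    println!("{:.1e} years", brute_force_years(128, 1e18));
}
```

More compute shifts the constant, not the exponent.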
Maybe LLMs can solve novel problems, maybe not. We don't know for sure. It's trending like it can.
There are still plenty of problems that more tokens would allow to be solved, and solved faster and better. There is absolutely no way we've already met the AI compute demand for the problems that LLMs can solve today.
There is zero evidence that LLMs can do anything novel without a human in the loop. At most an LLM is a hammer. Not exactly useless by any stretch of the imagination, but yes, you need a human to swing it.
Not really. You can leverage randomness (and LLMs absolutely do) to generate bespoke solutions and then use known methods to verify them. I'm not saying LLMs are great at this; they're hampered by their inability to "save" what they learn. But we know that any kind of "new idea" is a function of random and deterministic processes mixed together in varying amounts.
Everything is either random, deterministic, or some shade of the two. Human brain "magic" included.
Microsoft doesn't seem to care unless you're a company. That's the reason the Community edition is free. Individual licenses would be pennies to them, and they gain more than that by having a new person making things in their ecosystem. It's in their interest to make their platform as accessible as possible.
PulseAudio, when it came out, was utterly broken. It was clearly written by someone with little experience in
low-latency audio, as if the only use case were Bluetooth music streaming and nothing else. Systemd being from the same author made me heavily averse to it.
However, unlike with PulseAudio, I've encountered few technical problems with systemd. I certainly dislike the scope creep, and I appreciate that there are ideological differences and portability problems, but at least it works.
If Rust has one weakness right now, it's bindings to system and hardware libraries. There's a massive barrier in Rust communicating with the outside ecosystem that's written in C. The deliberate choice to use Rust and an existing Wayland abstraction library narrows their options down to either creating bindings of their own, or using smithay, the brand-new Rust/Wayland library written for the Cosmic desktop compositor. I won't go into details, but Cosmic is still very much in beta.
It would have been much easier and cost-effective to use wlroots, which has a solid base and has ironed out a lot of problems. On the other hand, Cosmic devs are actively working on it, and I can see it getting better gradually, so you get some indirect manpower for free.
I applaud the choice to not make another core Wayland implementation. We now have Gnome, Plasma, wlroots, weston, and smithay as completely separate entities. Dealing with low-level graphics is an extremely difficult topic, and every implementor encounters the same problems and has to come up with independent solutions. There's so much duplicated effort. I don't think people getting into it realize how deceptively complex low-level graphics is and how many edge cases it entails.
> using smithay, the brand new Rust/Wayland library
Fun fact: smithay is older than wlroots, if you go by commit history (January 2017 vs. April 2017).
> It would have been much easier and cost-effective to use wlroots
As a 25+ year C developer, and a ~7-year Rust developer, I am very confident that any boost I'd get from using wlroots over smithay would be more than negated by debugging memory management and ownership issues. And while wlroots is more batteries-included than smithay, already I'm finding that not to be much of a problem, given that I decided to base xfwl4 on smithay's example compositor, and not write one completely from scratch.
Thanks for the extra info. I'm glad it hasn't turned out to be much of an issue. I've looked at your repository and it seems to be off to a great start.
Personally, I'm eager to take on some bigger Rust projects, but I'm usually put off by the lack of decent bindings in my particular target area. It's getting better, and I'm sure the options will fill out more with time.
There really isn't a "massive barrier" to FFI. Autogenerate the C bindings and you're done. You don't have to wrap it in a safe abstraction, and imo you shouldn't.
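For readers who haven't tried it: the binding itself is just the C prototype restated in Rust, and the only tax is `unsafe` at the call site. A minimal sketch using libc's `strlen`, which is essentially what a generator like bindgen would emit for `size_t strlen(const char *s);`:

```rust
use std::ffi::CString;
use std::os::raw::c_char;

extern "C" {
    // The C prototype restated in Rust; bindgen generates these for you.
    fn strlen(s: *const c_char) -> usize;
}

fn c_strlen(s: &str) -> usize {
    let c = CString::new(s).expect("no interior NUL bytes");
    // The call is `unsafe` because the compiler can't check C's invariants.
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    println!("{}", c_strlen("hello")); // 5
}
```

Whether to layer a safe wrapper like `c_strlen` on top, or call the raw binding directly, is exactly the taste question being argued here.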
This. It is somewhat disheartening to hear the whole interop-with-C with Rust being an insurmountable problem. Leaving the whole “it’s funded by the Government/Google etc.” nonsense aside: I personally wish that at least a feeble attempt would be made to actually use the FFI capabilities that Rust and its ecosystem have before folks form an opinion. Personally (and I’m not ashamed to state that I’m an early adopter of the language), the FFI story is very good. Please consider that the Linux kernel project, Google, Microsoft, etc. went down the Rust path not on a whim but after careful analysis of the pros and cons. The pros won out.
> This. It is somewhat disheartening to hear the whole interop-with-C with Rust being an insurmountable problem.
I have done it and it left a bad taste in my mouth. Once you're doing interop with C you're just writing C with Rust syntax topped off with a big "unsafe" dunce cap to shame you for being a naughty, lazy programmer. It's unergonomic and you lose the differentiating features of Rust. Writing safe bindings is painful, and using community written ones tends to pull in dozens of dependencies. If you're interfacing a C library and want some extra features there are many languages that care far more about the developer experience than Rust.
> a big "unsafe" dunce cap to shame you for being a naughty, lazy programmer
You just have to get over that. `unsafe` means "compiler cannot prove this to be safe." FFI is unsafe because the compiler can't see past it.
> Once you're doing interop with C you're just writing C with Rust syntax
Just like C++, or Go, or anything else. You can choose to wrap it, but that's just indirection for no value, imo. I honestly hate seeing C APIs wrapped in "high-level" bindings in C++ for the same reason I hate seeing them in Rust. The docs/errors/usage are all in terms of the C API, and in my code I want to see something that matches the docs, so it should be "C in the syntax of $language".
> a big "unsafe" dunce cap to shame you for being a naughty, lazy programmer
That's bizarrely emotional. It's a language feature that allows you to do things the compiler would normally forbid you from doing. It's there because it's sometimes necessary or expedient to do those things.
My point is that using C FFI is "the things the compiler would normally forbid you from doing" so if that's a major portion of your program then you're better off picking a different language. I don't dislike rust, but it's not the right tool for any project that relies heavily on C libraries.
The GPUs have become much larger, so 6.8x is believable there, as is the inclusion of a matmul unit boosting AI.
The 2.x numbers are the most realistic, especially because they represent actual workloads.