It is not clear at all. Also, there are no conclusions; it's purely a waste of time, basically the story of a guy figuring out, for no reason, that the way maps are implemented in Go has changed.
And the title is about self-hosted compilers, whose "advantage" turned out to be just that the guy was able to read the code? How is that an advantage? I guess it is an advantage for him.
The TypeScript compiler is also written in Go instead of in TypeScript. So this shouldn't be an advantage? But this guy likes to read Go, so it would also be an advantage to him.
I agree that the article is a bit unfocused about the supporting material. But the primary topic is clear: it's about the memory consumption of the Go map implementation.
This is an article written by a real human person, who's going to meander a bit. I prefer that over an LLM article which is 100% focused, 100% confident, and 100% wrong. Let's give the human person a little bit of slack.
I think it is quite obvious - the author has found out that a memory trick that used to work in previous Go versions no longer works - in this singular use case.
I read somewhere that CPUs are better at generating graphics than GPUs (although I imagine much slower). Is that true? Does that explain why GUI libraries like Egui are so much uglier than, for example, Iced?
The main context where I've seen claims that CPUs are 'better' at graphics is where a render that looks precisely right and has the desired image quality is more important than a fast render.
Even that is more about control over the rendering process than what silicon is doing the work. With a lot of (older?) graphics APIs, you're implicitly giving the GPU's driver a lot of control over the output and how it is generated. This is how we got events like [0] where GPU vendors would make their drivers 'cheat' on benchmarks by trading image quality for speed when certain criteria were detected.
I imagine that tradeoff has changed somewhat as the industry has moved towards graphics APIs intended to give the programmer more direct control of the hardware.
I imagine the answer is "Higher quality" or "Better customization". You can get extremely precise control over the render pipeline on a CPU since you can calculate pixels however you want.
But...with today's world of pixel shaders (Really, a world that's existed for 10+ years now), I'd be surprised if there's actually any benefit to be had these days. With a proper pixel shader, I doubt there's anything you could do on a CPU that you couldn't do on a GPU, and the GPU would be massively parallel and do it much faster.
Your last sentence there captures my understanding. I don't think there's any "higher quality" graphics which could be rendered on a CPU that couldn't be rendered on a GPU. Since they are equivalent in what they can compute, the only differentiator would be speed, which is what GPUs are designed for.
But to play devil's advocate against myself, I have heard that programming for GPUs can be harder for many things. So maybe usability and developer-friendliness is what is meant by CPUs being better?
GPUs are TERRIBLE at executing code with tons of branches.
Basically, GPUs execute instructions in lockstep groups of threads. Each group executes the same instruction at the same time. If there's a conditional, and only some of the threads in a group have state that satisfies the condition, then the group is split and the two paths are executed serially rather than in parallel. The threads following the "true" path execute while the threads that need to take the "false" path sit idle. Once the "true" threads complete, they sit idle while the "false" threads execute. Only once both paths complete do the threads reconverge and continue.
They're designed this way because it greatly simplifies the hardware. You don't need huge branch predictors or out-of-order execution engines, and it allows you to create a processor with thousands of cores (the RTX 5090 has 21,760 CUDA cores!) without needing thousands of instruction decoders, which would be necessary to allow each core to do its own thing.
There ARE ways to work around this. For example, it can sometimes be faster to compute BOTH sides of a branch and then use the "if" only to select which result to keep. Then each thread merely needs to apply an assignment, so the stalls only last for an instruction or two.
Of course, it's worth noting that this non-optimal behavior is only an issue with divergent branches. If every thread decides the "if" is true, there's no performance penalty.
The simple calculations typically used for rendering graphics can easily be parallelized on the GPU, hence it's faster. But the result should be identical if the same calculations are done on the CPU.
Also, GUI frameworks like iced and egui typically support multiple rendering backends. I know iced is renderer-agnostic and can use a number of backends, including the GPU graphics APIs Vulkan, DX12, and Metal.
At least in the case of Nostr, the introduction text is definitely written for someone who understands tech vocab:
Nostr is an apolitical communication commons. A simple standard that defines a scalable architecture of clients and servers that can be used to spread information freely. Not controlled by any corporation or government, anyone can build on Nostr and anyone can use it.
Wasn't it better when only .com mattered? There are thousands of TLDs now, and that forces companies to buy multiple domains; the names aren't even memorable anymore, specifically because of the TLD part.
Well, it depends. If all you were interested in was getting a "good" (e.g., short) name in .COM, no.
In the late 90s, when NSF allowed Network Solutions to charge for domain names, people complained that they (now Verisign) had a monopoly, so after a number of fine lunches and dinners in far-off exotic places (see https://en.wikipedia.org/wiki/IAHC), there was a proposal to create more top-level domains, establish the registry/registrar split, adopt the Uniform Dispute Resolution Policy (primarily for intellectual property owners), etc. Then, the US government stepped in and started a process that led to the creation of ICANN.
The whole point of this exercise was to introduce competition into the domain name system. It did with the registry/registrar split, and it tried with the registries by having multiple rounds of a limited number of new top-level domains. However, the latter was kind of stupid (IMHO): the switching costs for changing TLDs are way too high for the existence of new TLDs to significantly impact Verisign's monopoly; instead, it created a bunch of monopolies.
However, people weren't happy with the "limited number" part of ICANN's efforts to introduce competition in the TLD space, so in 2012, the ICANN community (which anyone can be a part of) opened the floodgates, removed the arbitrary restrictions on how new top-level domains could be created, and we now have over 1500 TLDs.
It's still basically only .com that matters. There are a few others that matter commercially (setting aside .org for nonprofits, and .edu and .gov), such as .io, but for every "clever domain" startup, I see six more who, unable to get, say, frog dot com, go with something like "usefrog dot com" or "tryfrog dot com" in their early days and come back to snap up frog dot com after they get their Series C. They could have gotten, say, frog dot legal or frog dot engineering, but nobody wants those.
I don't know if these work or not for the specific case mentioned here, but the cheapest eSIMs by a huge margin are from https://silent.link/ if anyone is interested. They definitely do work under normal internet circumstances.
How can it be that one person living in Indonesia says everything is blocked and the country is in chaos and another, very calmly, is completely unaware and can't even find any news about it? This is so odd. What is the truth?
The context that was likely left out due to HN rules is that there are mass protests in several cities that have turned violent in the face of police brutality. The Indonesian government has a history of blocking/throttling internet access in the immediate areas of unrest to limit coverage.
Indonesia is a big country with over ten thousand islands and uneven coverage. What is blocked on one ISP might not be enforced on another (e.g. the state-owned ISP might block or use DNS poisoning on several "non-compliant" DNS providers but my current ISP doesn't). Also, in addition to what the sibling commenter (and another commenter regarding Cloudflare outage) said there might be a general overload on the mobile network near the affected areas since there are lots of users and limited bandwidth.
It's a country of 270 million people with over 10,000 islands. Last year I visited Borobudur and was surprised that the Yogyakarta region is autonomous and has its own king.