Functional programming has a lot of wonderful concepts that are very interesting in theory, but in practice its strictness borders on annoying and greatly hurts velocity.
Python has a lot of functional-style patterns and constructs, but it's not a pure functional language. Similarly, Python these days lets you add as much type information as you want, which can give you a ton of static checks, but it doesn't force you to the way other typed languages do. If some random private function is too messy to annotate and not worth the effort, you can just skip it.
I like the flexibility, since it leads to velocity and is also just straight up more enjoyable.
Is that per capita? Also, at least they are going in the right direction on most metrics (switching to electric, installing renewables, planting trees, etc.), whereas the US (under Trump) is hellbent on getting rid of renewables, focusing on coal and fossil fuels, slowing down electric cars, destroying national parks, etc.
One piece I'd like to see more clarification on is whether he is doing multiple samples per pixel (like with ray tracing). For his 1280x720 video, that's around 900k pixels, so at 30 kHz it would take around 30 s to record one of these videos doing one sample per pixel. But in theory he could run this for much longer and get a less noisy image.
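The back-of-the-envelope math, for anyone who wants to check it:

```python
width, height = 1280, 720
pixels = width * height          # 921,600 ≈ 900k pixels
sample_rate_hz = 30_000          # 30 kHz sampling

seconds_per_frame = pixels / sample_rate_hz
print(seconds_per_frame)         # ~30.7 s at one sample per pixel
```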
I find it interesting that a project like this could easily be a PhD paper, but nowadays YouTubers do it just for the fun of it.
It's humbling how well-rounded Brian is (like other YouTubers such as Applied Science, StuffMadeHere, and HuygensOptics) on top of clearly being a skillful physicist: electronics, coding, manufacturing... and the guy is _young_ compared to the seasoned professionals I mentioned in the parentheses.
If he randomized the sample positions with blue noise or something, he could use compressed sensing or AI denoising to get away with far fewer samples. The raw image wouldn't be as good, but by the time it's reconstructed it should look much better for the same sample count. It might not be as easy to move the mirror in both axes between each sample, though.
Edit: saw below that he is using a continuous scan, so randomizing it probably wouldn't be workable.
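For illustration, here's a rough sketch of the random-subsampling idea on a synthetic image. This uses uniform random positions rather than true blue noise, and a normalized box blur standing in for compressed sensing or a learned denoiser, so it's only a stand-in for the real techniques:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 64x64 smooth "scene" standing in for the scanned image.
y, x = np.mgrid[0:64, 0:64]
scene = np.sin(x / 6.0) * np.cos(y / 9.0)

# Sample only 25% of pixel positions, chosen uniformly at random
# (true blue-noise sampling would space the samples more evenly).
mask = rng.random(scene.shape) < 0.25
sparse = np.where(mask, scene, 0.0)

def box_blur(img, k=5):
    # Simple k x k box filter via padding and shifted sums.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# Crude reconstruction: blur the sampled values and normalize by the
# blurred sampling mask, so each pixel averages its nearby samples.
recon = box_blur(sparse) / np.maximum(box_blur(mask.astype(float)), 1e-6)
err = np.abs(recon - scene).mean()
```

The reconstruction error ends up well below the error of the raw sparse image, which is the basic intuition behind sampling fewer positions and denoising afterwards.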
In one of the appendix videos he mentions that this would improve the noise; the issue is that the data rate when exporting from the scope is a bottleneck, and it would slow things down even more.
So you're basically saying that you can spend as much money to get a knife that will cut as well but requires regular work put into it, whereas this doesn't? I think that's the whole pitch here...
Kinda annoying that the article doesn't really answer the core question, which is how much startup time was saved. It does give a 0.05 ms per tooltip figure, so multiplied by 38,000 that's ~2 s saved, which is not too bad.
"Together, these two problems can result in the editor spending an extremely long time just creating unused tooltips. In a debug build of the engine, creating all of these tooltips resulted in 2-5 seconds of startup time. In comparison development builds were faster, taking just under a second."
1. It might not be the best across all metrics today, but it definitely was a few years ago.
2. While it's true that other browsers like Firefox have been catching up to Chrome in speed, it's still true that Chrome helped lead the way, and if not for it, the web would likely be far slower today.
3. There has been an explosion of other browsers in the past few years, but admittedly they're all Chromium-based, so even that wouldn't have been possible without Chrome.
Safari has been better for going on 5 years now; the funny thing is it was worse for long enough that it seems everyone, even to this day, refuses to believe it.
Faster in basically every dimension. Supports way more specs than FF. Way more efficient on battery. Better-feeling scroll, better UI.
Chrome caught up in the last year or so, but Speedometer is also fairly arbitrary. Open/close, tab open/close, tab switching, scrolling, initial load, and resizing are all still far better. Actual app performance depends on the app, but for a few years Safari was clearly better.
100% objective; in fact, among better web developers this has been common knowledge. There were plenty of articles, side-by-side comparisons, and benchmarks over the last few years showing it.
It has gotten absolutely out of control. I will be reading an article about a new game, and the article won't even have a link to the store page to buy the game...
Which store page should they be linking to? Inevitably what you’re asking for is how we’ve ended up with sites spinning off thousands of articles stuffed full of affiliate links.
Just link to a few? There's a finite set of stores a game is usually on
On PC, it'll be Steam, GOG, maybe Humble. Then on consoles you have Xbox, PlayStation, and Nintendo. If you wanna put an affiliate link, go for it. It's better than no link at all.
These articles already bait my click for ads by never putting the name of the game in the title anyways. At least let me get to the game and buy it.
Before a model is announced, they use codenames on the arenas. If you look online, you can see people posting about new secret models and people trying to guess whose model it is.
You can use: https://news.ycombinator.com/front?day=2025-12-04 to get the frontpage on a given date.
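If you want to script it, the URL is trivial to build from a date (a minimal sketch):

```python
from datetime import date

def front_page_url(day: date) -> str:
    # Builds the HN front-page-by-date URL, e.g. ?day=2025-12-04
    return f"https://news.ycombinator.com/front?day={day.isoformat()}"

print(front_page_url(date(2025, 12, 4)))
# https://news.ycombinator.com/front?day=2025-12-04
```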