
Steve Burke from GamersNexus tested eight games from their benchmark suite on Linux last month. Although his conclusion was generally positive, there were problems with nearly every game:

- F1 2024 didn't load due to anti-cheat

- Dragon's Dogma 2 and Resident Evil 4 had non-functional raytracing

- Cyberpunk 2077 with raytracing on consistently crashes when reloading a save game

- Dying Light 2 occasionally freezes for a whole minute

- Starfield takes 25 minutes to compile shaders on first run, and framerates for Nvidia are halved compared to Windows

- Black Myth: Wukong judders badly on Nvidia cards

- Baldur's Gate 3 Linux build is a slideshow on Nvidia cards, and the Windows build fails for some AMD cards

If you research these games in discussion forums, you can find configuration tweaks that might fix the issues. ProtonDB ratings are not a perfect indicator (Black Myth: Wukong is rated "Platinum").

And while Steve says the Linux and Windows measurements are not directly comparable, I compared them anyway and saw that Linux suffers a 10-30% drop in average FPS across the board compared to Windows, depending on the game and video card.


AFAIK this comes down largely to NVIDIA not putting enough effort into its Linux drivers. There is a pretty well-documented and well-understood reason for the perf hit NVIDIA GPUs take on Linux.

Honestly, considering where we came from, a 10-30% perf drop is good and a reasonable tradeoff to consider. Especially for all the people who don't want to touch Windows 11 with an 11-foot pole (myself included), it's a more than decent path. I can reboot into my unsupported Win10 install if I really need the frames.

Really, Linux benchmarks need to be split between AMD and NVIDIA. Both are useful, as the "just buy an amd card lol" crowd is ignoring the actually large NVIDIA install base, and it's not like I'm gonna swap out my RTX 3090 to go Linux.

Thanks for the comparison! Would you happen to have an apples-to-apples, or rather an NVIDIA-to-NVIDIA, comparison instead of "across the board"? I'd suspect the numbers are worse for the NVIDIA-only comparison, for the reason I mentioned above.


To each their own, but Windows 11 runs flawlessly on my machine with high-end specs and a 240 Hz monitor.

The Start menu works great with no lag, even immediately after booting.

The only thing that I consider annoying would be the 'Setup' screens that sometimes show up after bigger updates.

---

Would I trade it all to get on Bazzite DX:

- lower game compatibility and potential bugs

- subpar NVIDIA drivers with the risk of performance degradation

- restricted development in dev containers relying on VS Code Remote

- Loss of the Backblaze Unlimited plan

+ system rollbacks if an update fails

---

That does not seem worth it to me.


The Start menu worked 30 years ago on 32 MB of RAM and a box of scraps.

Considering I found the Win10 Start menu too slow, the Win11 one doesn't stand a chance. But your comment gives me hope: it shows that Win11 is not the complete shitshow people make it out to be, though the few times I used it on relatives' computers I found it not responsive enough.

I'm testing Linux as a daily driver on my main rig (high-end from a few years ago, 5900X + 3090), and honestly I'm rediscovering my computer. A combination of less fluff, fewer animations, better filesystem performance (NTFS on NVMe is suboptimal), etc. I was getting fed up with a few Windows quirks: weird updates breaking stuff, weird audio issues (e.g. the audio subsystem picking up ~10 s of latency on any interaction, like playing a new media file or opening the output switcher), weird display issues (the computer locking up when powering my 4K TV on or off), and whatnot. I'm still keeping the Win10 install around, as an unsupported OS is less of a problem for the occasional game, especially since I mostly play offline games.

As for the dev env, you're not limited to Bazzite; I run Arch. Well, I've been running it for two weeks on this rig. But you really do get the best devex on Linux.


The start menu seems to respond instantly for me.

That's on a 7950X3D with 64 GB RAM and a Samsung 990 Pro SSD. Maybe it performs worse on slower hardware.

I have 14 TB of SSDs connected, so it's not like there is no content on my PC.

Notably I don't have any HDDs connected, maybe that plays a role here.


The few Win11 machines I've touched were all on NVMe drives, which I'm pretty sure are fast enough for a Start menu. I mean, gear like yours should not be needed to get a responsive Start menu.

I'm curious: did you clean up what's in the Start menu by default, stuff like "Recommended", Candy Crush, and the like? On the Win11 machines I tested, those parts loaded more slowly than the rest; I wonder if the Start menu has some kind of "load, then open" timeout.

Had I switched to Win11 I'd have slapped Classic Shell on it, as I did on Win10. It's a reimplementation of the Win7 Start menu with a Windows-version-appropriate design but Win7 responsiveness (it opens literally on the next frame, in no small part thanks to the absence of animation).


After checking the responsiveness of the start menu earlier, I uninstalled or unpinned the useless stuff in Pinned.

I don't think it made a difference, it was already lag free before.

It's annoying they put Office Copilot and Instagram there, but they uninstalled with just two clicks per item, taking a minute or so to get rid of everything.


> The Start menu works great with no lag, even immediately after booting.

The very fact that this has to be explicitly mentioned is laughable.

Even $100 Chinese phones can achieve the same; this is the bare minimum for a modern system capable of driving a 240 Hz monitor (which I assume it can do with most games).


The Start menu bug is one of the few problems that Windows has; compare that to Linux.

>a 10-30% perf drop is good and is a reasonable tradeoff to consider

You are either trolling or completely out of your mind. You simply cannot be serious when saying stuff like this.


I'm not. The situation is improving rapidly, and I'd expect the gap to close soon.

I still have the windows install. And with an RTX 3090, framerate is not that much of a consideration for most games, especially since my main monitor is "only" 1440p, albeit a 144Hz one.

Couple that with G-Sync and framerate fluctuations aren't really noticeable. Gone are the days when dipping below 60 Hz was a no-no. The most important metrics are stutter and 1% lows; those are what really affect the feel of a game. My TV is 120 Hz with G-Sync too, and couch games with a controller are much less sensitive to framerate.

Do I leave performance on the table? Surely. Do I care? In the short term, no. The last GPU-intensive games I played were Hogwarts Legacy and Satisfactory, both of which can take a hit (Satisfactory doesn't max out the GPU, and Hogwarts can tolerate DLSS). The next intensive game I plan on playing is GTA VI, and by then I'd fully expect the perf gap to have closed, and the game to play fine, given how much care Rockstar puts into the performance of their games, more so with the Gabe Cube being an actual target.

In the long run, I agree this is not a "happy" compromise. I paid for that hardware dammit. But the NVIDIA situation will be solved by the time I buy a new GPU: either they completely drop out of the gaming business to focus on AI, or they fix their shit because Linux will be an actual gaming market and they can't keep giving the finger to the penguin.


It’s reasonable to consider. If a title runs at 80FPS on Windows, it’ll be completely playable on Linux. Framerate isn’t everything.

It's perfectly reasonable. I actually run my Nvidia card at a 30% underclock so it works out fine for me on Linux.

Is it really crazy? Some stuff runs faster and some runs slower, and I deal with less bullshit. I was already running with 17-20% reduced power limits anyway.

> Baldur's Gate 3 Linux build is a slideshow on Nvidia cards

I played Baldur's Gate 3 on Linux on a GeForce GTX 1060 (which is almost 10 years old!) without a fan (I later found out it was broken), and I generally did not have issues (a couple of times over the whole game it slowed down for a couple of seconds, but nothing major).


The key words were "Linux build". There's now an official Linux version so that BG3 runs better on the Steam Deck. Everyone else should keep running it through Proton like they've done thus far.

Which applies to basically all games. Nowadays I make sure to select Proton before even running a game for the first time, in case it has a Linux build -- that will invariably be the buggier experience, so I want to avoid it.


That's the whole problem. No consistency. Some configurations work, others don't - even though they should be way more capable.

That's not even limited to Linux or gaming. A few weeks ago I tried to apply the latest Windows update to my 2018 Lenovo ThinkPad. It complained about insufficient space (I had 20 GB free). I then used a USB drive as extra storage (as Windows required) and tried to install the update. I gave up after an hour without progress...

Hardware+OS really seems unfixable in some cases. I'm 100% getting a macbook next time. At least with Apple I can schedule a support appointment.


For gaming, macOS does not seem like a great choice. I have friends on macOS and, at least on Steam, very few games run on that platform.

Additionally, when I was using macOS for work, I also ran into some unexpected issues whenever I wanted to do anything a bit more unusual (think packages installed using Homebrew, compiling something from source, etc.).

So for me the options are: either use a locked-down device where you can't do anything other than what the designers thought of, and if you are lucky it will be good, OR use something where you have complete freedom and take on the responsibility of tweaking when things don't work. macOS tries to be the first option (but in my opinion does not succeed as much as it claims to), Linux is the second option (but it is harder than it could be in many cases), and Windows tries to do both (and is worse than either alternative).


It's a CPU bound game

This sounds nothing like my personal experience. I was able to play every single game I tried, including:

Assetto Corsa Competizione

Basically all total war games

Cyberpunk

Witcher 3

Dishonored

Mafia 1 and 2

AC origins/odyssey

Civilization 5

Detroit become human

Prey

Crusader kings 3

Stellaris

Metro exodus

And more games I don’t remember

I don’t enable ray tracing or resolution scaling so that might be making the difference on games that have it.

Chances are you can run something if it's rated at least Gold on ProtonDB.

Also my gpu is amd.

As a side note, I played Cyberpunk for more than 400 hours on max settings without any major issue, so saying it doesn't work because of RTX is very silly IMHO.


> Baldur's Gate 3 Linux build is a slideshow on Nvidia cards

Not at all my experience, which makes me question the rest. Also, per https://www.protondb.com/app/1086940 most people seem quite happy with it, so it's not a "me" problem.

Finally, the "10-30% drop in average FPS across the board" might be correct, but so what? I understand a LOT of gamers want "the best" performance for what they paid good money for, but pretty much NO game becomes less fun with even a 30% FPS drop; you just adjust the settings and go play. I think a lot of gamers get confused and treat maximizing performance as a game in itself. It might be fun, and that's 100% OK, but it's also NOT what playing an actual game is about.


Those are mostly reports for the Windows build of Baldur's Gate 3, running through Proton/Wine. He's talking about the newer Linux native build of the game from 3 months ago.

There's a few reports there for the native version of the game: https://www.protondb.com/app/1086940#9GT638Fuyx , with similar Nvidia GPU issues and a fix.


> pretty much NO game becomes less fun with even a 30% FPS drop

I mostly play fighting games. A 7% drop in FPS is more than enough to break the whole game experience, as combos rely on frame data. For example, Street Fighter 6 is locked at 60 fps. A low punch takes 4 frames to come out and leaves a 4-frame window to land another hit. With a 7% drop in FPS, you would miss your combo. Even the tiniest drop in FPS makes the game unplayable.

It's the same for almost every fighting game. I know it's a niche genre, but I'm quite sure it's similar for other genres. It's a complete dealbreaker for competitive play.


> It's a complete dealbreaker for competitive play

Very true, and this is the biggest issue for me when it comes to gaming on Linux. And it's not just raw FPS count. You can usually brute force your way around that with better hardware. (I'm guessing you could probably get a locked 60 in Street Fighter 6 even with a 30% performance loss?). It's things like input lag and stutter, which in my experience is almost impossible to resolve.

If it weren't for competitive shooters, I could probably go all Linux. But for now I still need to switch over to Windows for that.


I played competitive Quake on LAN and online. If your setup (hardware/software) can't handle your configuration, you either get a better one (spend money, roll back your OS, etc.) or adjust it (lower your settings; nobody plays competitively for the aesthetics, and Quake in that context is damn ugly and nobody cares).

It's not about a drop in game, it's about being prepared for the game. If you get a 7% drop, or even a .1% drop (whatever is noticeable to you) then you adjust.

To be clear, I'm not saying worse performance is OK; I'm saying everybody wants 500 FPS from $1 hardware and nobody gets that. Consequently we get a compromise, e.g. pay $2000 for 60 FPS and so be it. If you have to pay $2000 + $600, or lower your graphics settings, to still get 60 FPS, that's what you do.

PS: FWIW, competitive gaming is a niche within gaming. Most people might want to compete, but in practice most are not, at least not professionally. It's still an important use case, but it's not the majority one. Also, from my own personal experience, I didn't get a performance drop.


You're talking about the Proton version, parent was talking about the Linux native build that is optimized for Steam Deck.

The app I most regret losing in the 64-bit transition is Disney Animated [1]. App Of The Year in 2013, and gone completely a few years later...

[1] https://mashable.com/archive/disney-animation-app


I would loosen the memory timings a bit and see if that resolves the ECC errors. x265 performance shouldn't fall since it generally benefits more from memory clock rate than latency.

Also, could you share some relevant info about your processor, mainboard, and UEFI? I see many internet commenters question whether their ECC is working (or ask if a particular setup would work), and far fewer that report a successful ECC consumer desktop build. So it would be nice to know some specific product combinations that really work.


I've been on AM4 for most of the past decade (and still am, in fact), and the mainboards I've personally had in use with working ECC support were:

  - ASRock B450 Pro4
  - ASRock B550M-ITX/ac
  - ASRock Fatal1ty B450 Gaming-ITX/ac
  - Gigabyte MC12-LE0
There's probably many others with proper ECC support. Vendor spec sheets usually hint at properly working ECC in their firmware if they mention "ECC UDIMM" support specifically.

As for CPUs, that is even easier for AM4: everything that is based on an APU core (including some SKUs marketed without an iGPU that just have the iGPU part of the APU disabled, such as the Ryzen 5 5500) cannot support ECC. An exception to that rule are the "PRO"-series APUs, such as the Ryzen 5 PRO 5650G et al., which have an iGPU but also support ECC. The main differences (apart from the integrated graphics) between CPU and APU SKUs are that the latter do not support PCIe 4.0 (APUs are limited to PCIe 3.0) and have a few watts lower idle power consumption.

When I originally built the desktop PC that I still use (after a number of in-place upgrades, such as swapping out the CPU/GPU combo for an APU), I blogged about it (in German) here: https://johannes.truschnigg.info/blog/2020-03-23#0033-2020-0...

If I were to build an AM5 system today, I would look into mainboards from ASUS for proper ECC support - they seem to have it pretty much universally supported on their gear. (Actual out-of-band ECC with EDAC support on Linux, not the DDR5 "on-DIE" stuff.)
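
By the way, if anyone wants to sanity-check that ECC is actually active end to end (rather than just having ECC UDIMMs plugged in), the Linux EDAC subsystem exposes per-memory-controller error counters in sysfs. Here is a rough sketch of reading them in Python; the /sys/devices/system/edac/mc layout is the standard one, but whether any mc* entry shows up at all depends on your board, firmware, and the right edac driver being loaded:

    #!/usr/bin/env python3
    # Quick check that the Linux EDAC subsystem sees a memory controller, and
    # report its ECC error counters. No mc* entries usually means no EDAC
    # driver is bound, i.e. ECC is probably not active (or the module isn't loaded).
    from pathlib import Path

    EDAC_ROOT = Path("/sys/devices/system/edac/mc")

    controllers = sorted(EDAC_ROOT.glob("mc[0-9]*")) if EDAC_ROOT.exists() else []
    if not controllers:
        print("No EDAC memory controllers found - ECC likely not active.")
    for mc in controllers:
        ce = (mc / "ce_count").read_text().strip()  # corrected errors
        ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
        print(f"{mc.name}: corrected={ce} uncorrected={ue}")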


Grain is independent frame-to-frame. It doesn't move with the objects in the scene (unless the video's already been encoded strangely). So long as the synthesized noise doesn't have an obvious temporal pattern, comparing stills should be fine.

Regarding aesthetics, I don't think AV1 synthesized grain takes into account the size of the grains in the source video, so chunky grain from an old film source, with its big silver halide crystals, will appear as fine grain in the synthesis, which looks wrong (this might be mitigated by a good film denoiser). It also doesn't model film's separate color components properly, but supposedly that doesn't matter because Netflix's video sources are often chroma subsampled to begin with: https://norkin.org/pdf/DCC_2018_AV1_film_grain.pdf

Disclaimer: I just read about this stuff casually so I could be wrong.


> Grain is independent frame-to-frame. It doesn't move with the objects in the scene (unless the video's already been encoded strangely)

That might seem like a reasonable assumption, but in practice it’s not really the case. Due to nonlinear response curves, adding noise to a bright part of an image has far less effect than a darker part. If the image is completely blown out the grain may not be discernible at all. So practically speaking, grain does travel with objects in a scene.

This means detail is indeed encoded in grain to an extent. If you algorithmically denoise an image and then subtract the result from the original to get only the grain, you can easily see “ghost” patterns in the grain that reflect the original image. This represents lost image data that cannot be recovered by adding synthetic grain.


It sounds like the "scaling function" mentioned in the article may be intended to account for the nonlinear interaction of the noise.


> If you algorithmically denoise an image and then subtract the result from the original to get only the grain, you can easily see “ghost” patterns in the grain that reflect the original image. This represents lost image data that cannot be recovered by adding synthetic grain.

The synthesized grain is dependent on the brightness. If you were to just replace the frames with the synthesized grain described in the OP post instead of adding it, you would see something very similar.


> So long as the synthesized noise doesn't have an obvious temporal pattern, comparing stills should be fine.

The problem is that the initial noise-removal and compression passes still removed detail (that is more visible in motion than in stills) that you aren't adding back.

If you do noise-removal well you don't have to lose detail over time.

But it's much harder to do streaming-level video compression on a noisy source without losing that detail.

The grain they're adding somewhat distracts from the compression blurriness but doesn't bring back the detail.


>The grain they're adding somewhat distracts from the compression blurriness but doesn't bring back the detail.

Instead of wasting bits trying to compress noise, they can remove noise first, then compress, then add noise back. So now there aren't wasted bits compressing noise, and those bits can be used to compress detail instead of noise. So if you compare FGS compression vs non-FGS compression at the same bitrate, the FGS compression did add some detail back.
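
To make the bit-accounting argument concrete, here's a toy numpy sketch. Nothing in it resembles a real encoder: a Gaussian blur stands in for the denoiser, and "DCT coefficients that survive quantization" stands in for bitrate. The point is just that the denoised frame codes far more cheaply, freeing budget for real detail, with grain re-synthesized on top afterwards:

    import numpy as np
    from scipy.fft import dctn
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)

    # Toy "frame": a smooth image plus film-grain-like white noise.
    yy, xx = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64), indexing="ij")
    clean = 0.5 + 0.4 * np.sin(6 * xx) * np.cos(4 * yy)
    noisy = clean + 0.05 * rng.standard_normal(clean.shape)

    def coded_cost(img, q=0.05):
        """Crude stand-in for bitrate: DCT coefficients surviving quantization."""
        return np.count_nonzero(np.round(dctn(img, norm="ortho") / q))

    denoised = gaussian_filter(noisy, sigma=1.0)                 # strip the grain first
    shown = denoised + 0.05 * rng.standard_normal(noisy.shape)   # re-add synthetic grain

    print("cost of noisy frame:   ", coded_cost(noisy))
    print("cost of denoised frame:", coded_cost(denoised))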


I imagined that at some point someone would come up with the idea “let’s remove more noise to compress things better and then add it back on the client”. Turns out, it is Netflix (I mean, who else wins so much from saving bandwidth).

Personally I rejected the idea after thinking about it for a couple of minutes, and I’m not yet sure I was wrong.

The challenge with noise is that it actually cannot be perfectly and automatically distinguished from what could be finer detail and texture, even in a still photo, let alone high-resolution footage. If removing noise were as simple as that, digital photography would be completely different. And once you have removed noise, you can't just add the missing detail back later; if you could, you would not have removed it in the first place (alas, no algorithm is good enough, and even the human eye can be faulty).


I'm not saying that the final result is as good as the original.

I'm saying that the final result is better than standard compression at the same bitrate.


That might be true; however, if this takes hold I would be surprised if they chose to keep producing and shipping the tasty-grain, high-fidelity footage.

Considering that NR is generally among the very first steps in the development pipeline (as that's where it is most effective), and the rest of the dynamic-range wrangling and colour grading comes on top of it, they might consider it a "waste" to 1) process everything twice (once with this new extreme NR, once with minimal NR that leaves the original grain), 2) keep both copies around, and especially (the costliest part) 3) ship that delicious analog noise over the Internet to the people who want quality.

I mean, how far do we go? It’ll take even less bandwidth to just ship prompts to a client that generates the entire thing on the fly. Imagine the compression ratios…


That argument could be made to reject any form of lossy compression.

Lossy compression enables many use cases that would otherwise be impossible. Is it annoying that streaming companies drive the bitrate overly low? Yes. However, we shouldn't blame the existence of lossy compression algorithms for that. Without lossy compression, streaming wouldn't be feasible in the first place.


> Grain is independent frame-to-frame. It doesn't move with the objects in the scene (unless the video's already been encoded strangely). So long as the synthesized noise doesn't have an obvious temporal pattern, comparing stills should be fine.

Sorry if I wasn't clear -- I was referring to the underlying objects moving. The codec is trying to capture those details, the same way our eye does.

But regardless of that, you absolutely cannot compare stills. Stills do not allow you to compare against the detail that is only visible over a number of frames.


People often assume noise is normal and IID, but it usually isn't. It's a fine approximation but not the same thing, which is what the parent is discussing.

Here's an example that might help you intuit why this is true.

Let's suppose you have a digital camera and walk towards a radiation source and then away. Each radioactive particle that hits the CCD causes it to over saturate, creating visible noise in the image. The noise it introduces is random (Poisson) but your movement isn't.

Now think about how noise is introduced. There are a lot of ways, actually, but I'm sure this thought exercise will reveal how some of them cause noise across frames to be dependent. Maybe as a first thought, think about film sitting on a shelf, degrading.


I think this is geared towards film grain noise, which is independent from movement?


It's the same thing. Yes, it's not related to the movement of the camera, but I thought that would make it easier to build your intuition about silver particles being deposited onto film. You make film in batches, right?

The point is that just because things are random doesn't mean there aren't biases.

To be much more accurate, it helps to understand what randomness actually is. It is a measurement of uncertainty. A measurement of the unknown. This is even true for quantum processes that are truly random. That means we can't know. But just because we can't know doesn't mean it's completely unknown, right? We have different types of distributions and different parameters within those distributions. That's what we're trying to build intuition about.


I think you've missed the point here: the noise in the originals acts as dithering, and increases the resolution of the original video. This is similar to the noise introduced intentionally in astronomy[1] and in signal processing[2].

Smoothing the noise out doesn't make use of that additional resolution, unless the smoothing happens over the time axis as well.

Perfectly replicating the noise doesn't help in this situation.

[1]: https://telescope.live/blog/improve-image-quality-dithering

[2]: https://electronics.stackexchange.com/questions/69748/using-...


Your first link doesn't seem to be about introducing noise, but removing it by averaging the value of multiple captures. The second is to mask quantizer-correlated noise in audio, which I'd compare to spatial masking of banding artifacts in video.

Noise is reduced to make the frame more compressible. This reduces the resolution of the original only because it inevitably removes some of the signal that can't be differentiated from noise. But even after noise reduction, successive frames of a still scene retain some frame-to-frame variance, unless the noise removal is too aggressive. When you play back that sequence of noise-reduced frames you still get a temporal dithering effect.


Here's a more concrete source [1], which summarizes dithering in analog-to-digital converters as follows:

With no dither, each analog input voltage is assigned one and only one code. Thus, there is no difference in the output for voltages located on the same "step" of the ADC's "staircase" transfer curve. With dither, each analog input voltage is assigned a probability distribution for being in one of several digital codes. Now, different voltages within the same "step" of the original ADC transfer function are assigned different probability distributions. Thus, one can see how the resolution of an ADC can be improved to below an LSB.

In actual film, I presume the random inconsistencies of the individual silver halide grains are the noise source, and when watching such a film, I presume the eyes do the averaging through persistence of vision [2].

In either case, a key point is that you can't bring back any details by adding noise after the fact.

[1]: https://www.ti.com/lit/an/snoa232/snoa232.pdf section 3.0 - Dither

[2]: https://en.wikipedia.org/wiki/Persistence_of_vision
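
If anyone wants to convince themselves of that sub-LSB claim numerically, here's a toy sketch (a bare rounding quantizer, nothing like a real ADC). Averaging many dithered samples recovers a value sitting inside one quantizer step, while adding the same noise after quantization recovers nothing:

    import numpy as np

    rng = np.random.default_rng(0)
    lsb = 1.0
    signal = np.full(100_000, 0.37 * lsb)   # a value sitting inside one "step"

    def quantize(x):
        return np.round(x / lsb) * lsb

    no_dither   = quantize(signal)
    dithered    = quantize(signal + rng.uniform(-lsb / 2, lsb / 2, signal.size))
    noise_after = quantize(signal) + rng.uniform(-lsb / 2, lsb / 2, signal.size)

    for name, x in [("no dither", no_dither),
                    ("dither before quantizer", dithered),
                    ("noise added after", noise_after)]:
        print(f"{name:>24}: mean = {x.mean():+.3f}   (true value +0.370)")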


One thing worth noting is that this extra detail from dithering can be recovered when denoising by storing the image at higher precision. This is a big part of the reason 10-bit AV1 is so popular. It turns out that by adding extra bits of precision, you end up with an image that is easier to compress accurately, since the encoder has lower error from quantization.


The AR coefficients described in the paper are what allow basic modeling of the scale of the noise.

> In this case, L = 0 corresponds to the case of modeling Gaussian noise whereas higher values of L may correspond to film grain with larger size of grains.
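
For anyone curious what that looks like, below is a heavily simplified sketch of an autoregressive grain synthesizer in that spirit. It skips everything that makes the real AV1 model work (fixed-point grain templates, the luma-dependent scaling function, per-plane parameters, block-wise application); the only point is that L = 0 degenerates to plain white Gaussian noise, while a larger lag with nonzero coefficients produces spatially correlated, chunkier grain:

    import numpy as np

    rng = np.random.default_rng(0)

    def synth_grain(size=128, lag=2, sigma=1.0, total_gain=0.8):
        """Toy AR grain synthesis: white Gaussian noise filtered by a causal
        autoregressive kernel over a lag-L neighborhood (2*L*(L+1) coefficients,
        the same neighborhood shape the AV1 model uses). Larger lag -> larger,
        spatially correlated grains; lag 0 -> plain Gaussian noise."""
        n_coeffs = 2 * lag * (lag + 1)
        coeffs = np.full(n_coeffs, total_gain / max(n_coeffs, 1))  # crude, stable choice
        grain = np.zeros((size + lag, size + 2 * lag))
        noise = sigma * rng.standard_normal(grain.shape)
        for y in range(lag, size + lag):
            for x in range(lag, size + lag):
                acc, k = 0.0, 0
                for dy in range(-lag, 1):
                    for dx in range(-lag, lag + 1):
                        if dy == 0 and dx == 0:
                            break  # only previously generated (causal) neighbors
                        acc += coeffs[k] * grain[y + dy, x + dx]
                        k += 1
                grain[y, x] = noise[y, x] + acc
        return grain[lag:, lag:size + lag]

    fine_grain = synth_grain(lag=0)    # L = 0: plain white noise
    chunky_grain = synth_grain(lag=2)  # L = 2: correlated, chunkier grain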


The article points out the masking effect of grain, which hides the fake-looking compression artifacts, and also the familiarity/nostalgia aspect. But I will offer an additional explanation.

Look around you: nearly all surfaces have some kind of fine texture and are not visually uniform. When this is recorded as video, the fine texture is diminished due to things like camera optics, limited resolution, and compression smoothing. Film grain supplies some of the high frequency visual stimulus that was lost.

Our eyes and brains like that high frequency stimulation and aren't choosy about whether the exact noise pattern from the original scene is reproduced. That's why the x265 video encoder (which doesn't have grain synthesis since it produces H.265 video) has a psy-rd parameter that basically says, "try to keep the compressed video as 'energetic' as the original, even if the energy isn't in the exact same spot", and even a psy-rdoq parameter that says, "prefer higher 'energy' in general". These parameters can be adjusted to make a compressed video look better without needing to store more data.


Idle power almost always goes up with higher resolutions and refresh rates [1], and AMD cards typically raise their idle clockspeeds more drastically than Nvidia cards [2] when resolution or refresh rate increases. The OP uses an 8K 60Hz screen so 45W seems reasonable.

[1] TechPowerUp and ComputerBase have the most thorough collections of power consumption measurements, but compare them to each other and you'll see how much it depends on the test setup.

[2] Nvidia's latest 5000 series cards buck this trend. The 9070 XT's direct competitor, the 5070 Ti, has especially high idle consumption for no clear reason.


True, but I tested the Radeon RX9070’s power consumption with a 4K monitor.

  * ASUS, builtin-GPU@4K: ≈39W
  * ASUS + nVidia GF4070@4K idle: ≈50W
  * ASUS + radeon RX9070 (Linux 6.15): ≈80W


Puffer channel changes are near-instant. https://puffer.stanford.edu/


I clicked because of the bait-y title, but ended up reading pretty much the whole post, even though I have no reason to be interested in ZFS. (I skipped most of the stuff about logs...) Everything was explained clearly, I enjoyed the writing style, and the mobile CSS theme was particularly pleasing to my eyes. (It appears to be Pixyll theme with text set to the all-important #000, although I shouldn't derail this discussion with opinions on contrast ratios...)

For less patient readers, note that the concise summary is at the bottom of the post, not the top.


That being:

> As we’ve seen from the last 7000+ words, the overheads are not trivial. Even with all these changes, you still need to have a lot of deduplicated blocks to offset the weight of all the unique entries in your dedup table. [...] what might surprise you is how rare it is to find blocks eligible for deduplication are on most general purpose workloads.

> But the real reason you probably don’t want dedup these days is because since OpenZFS 2.2 we have the BRT (aka “block cloning” aka “reflinks”). [...] it’s actually pretty rare these days that you have a write operation coming from some kind of copy operation, but you don’t know that came from a copy operation. [...] [This isn't] saving as much raw data as dedup would get me, though it’s pretty close. But I’m not spending a fortune tracking all those uncloned and forgotten blocks.

> [Dedup is only useful if] you have a very very specific workload where data is heavily duplicated and clients can’t or won’t give direct “copy me!” signal

The section labeled "summary" IMO doesn't do the article justice, as it's fairly vague. I hope these quotes from near the end of the article give a more concrete idea of why you would (or wouldn't) use it.


> offset the weight of all the unique entries in your dedup table

Didn't read the 7000 words... But isn't the dedup table a bunch of Bloom filters, so the whole table can be stored with ~1 bit per block?

When the filter says a block is likely a duplicate, you can record just those blocks and find all the actual duplicates in a single later scan.

That avoids the massive accounting overhead of storing per-block metadata.
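
For what it's worth, my understanding is that the classic DDT stores a full entry (checksum, refcount, block pointer) per unique block, which is exactly the overhead the article is about; it isn't a probabilistic structure. But the two-pass scheme you describe would look roughly like the sketch below (illustrative Python; find_dedup_candidates is a hypothetical helper, not anything ZFS provides). Only the flagged blocks would then need exact verification in the second pass.

    import hashlib

    class BloomFilter:
        """Tiny Bloom filter: m bits total, k hash probes per item."""
        def __init__(self, m_bits=1 << 23, k=4):
            self.m, self.k = m_bits, k
            self.bits = bytearray(m_bits // 8)

        def _probes(self, item: bytes):
            for i in range(self.k):
                h = hashlib.blake2b(item, digest_size=8, salt=i.to_bytes(8, "little"))
                yield int.from_bytes(h.digest(), "little") % self.m

        def add(self, item: bytes):
            for p in self._probes(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, item: bytes):
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._probes(item))

    def find_dedup_candidates(blocks):
        """Pass 1: remember block checksums approximately, flag ones seen twice.
        False positives are possible, so a later exact scan confirms candidates."""
        seen, candidates = BloomFilter(), set()
        for block in blocks:
            digest = hashlib.sha256(block).digest()
            if digest in seen:
                candidates.add(digest)
            seen.add(digest)
        return candidates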


It scrolls horizontally :(


It's because of this element in one of the final sections [1]:

    <code>kstat.zfs.<pool>.misc.ddt_stats_<checksum></code>
Typesetting code on a narrow screen is tricky!

[1] https://despairlabs.com/blog/posts/2024-10-27-openzfs-dedup-...


Not on Firefox on Android it doesn't.


It does in Chrome on Android (1080 px wide screen, standard PPI and zoom levels), but not by enough that you see it in the main body text (scrolling just reveals more margin), so you might find it does for you too, just not by enough that you noticed.

Since it is scrolling here, though inconsequentially, it might be bad on a smaller device with less screen and/or different PPI settings.


Chips and Cheese most reminds me of the long-gone LostCircuits. Most tech sites focus on the usual slate of application benchmarks, but C&C writes, and LC wrote, long-form articles about architecture, combined with subsystem micro-benchmarks.


2D animation traced over live action is called rotoscoping. Many of Disney's animated movies from the Walt Disney era used rotoscoping, so I don't think it's fair to say it results in poor quality.

https://en.wikipedia.org/wiki/List_of_rotoscoped_works#Anima...


The comment was about naive tracing. When Disney used rotoscoping, they had animators draw over the live-action pose while conforming to a character model.

The experienced animators and inbetweeners knew how to produce smooth line motion, and the live action was used for lifelike poses, movement, etc. It wasn't really tracing.

There are examples of this in the Disney animation books; the finished animation looks very different from the live actors, but with the same movement.


On the other side of the same coin, when animating VFX for live action, animation which looks "too clean" is also a failure mode. You want to make your poses a little less good for camera, introduce a little bit of grime and imperfection, etc.

Animation is a great art and it takes a lot of skill to make things look the way they ought to for whatever it is you are trying to achieve.

Most animators don't like the "digital makeup" comparison (because it's often used in a way which feels marginalizing to their work on mocap-heavy shows), but if you interpret it in the sense that makeup makes people look the way they are "supposed to" I think it's a good model for understanding why rotoscope and motion capture don't yet succeed without them.


Rotoscoping has its place. It can save a lot of time/money for scenes with complex motion and can produce good results, but overreliance on it does tend to produce worse animation since it can end up being constrained to just what was captured on film. Without it, animators are more free to exaggerate certain motions, or manipulate the framerate, or animate things that could never be captured on camera in the first place. That kind of freedom is part of what makes animation such a cool medium. Animation would definitely be much worse off if rotoscoping was all we had.


"Animation would definitely be much worse off if rotoscoping was all we had." Yeah, then it wouldn't be animation anymore.


I mean, rotoscoping is still animation, but it's just one technique/tool of the trade. I thought it was used well in Undone, and I enjoyed The Case of Hana & Alice.


Rotoscoping was used for some difficult shots. Mostly, live action was used for reference rather than directly traced, Fleischer-style. I've never seen rotoscoping that looked as masterful as in Snow White and similar golden-age films.

https://www.youtube.com/watch?v=smqEmTujHP8


A Scanner Darkly is rotoscoped

https://youtu.be/l1-xKcf9Q4s


A Scanner Darkly is more like a manual post-processing effect than animation.


The creators refer to it as rotoscoping quite often; there are some cool details (not much related to this discussion) here: https://blogs.iu.edu/establishingshot/2022/03/28/rotoscoping...


I'm not saying it isn't really roto, I'm saying it's not really used here as part of an animation pipeline so much as in a compositing pipeline.

