The issue with copy machines modifying glyphs isn't a problem with all algorithms, really just that one (JBIG2's pattern-matching-and-substitution mode). Instead of merely discarding detail like an ordinary lossy codec, it would notice similar sections of the image and make them identical, so two different glyphs could come out of the machine as the same one.
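To make that failure mode concrete, here's a minimal sketch of symbol-matching compression. It is not real JBIG2; the tile representation, threshold, and Hamming metric are all illustrative assumptions, but the mechanism is the same: tiles within the match tolerance get collapsed onto one shared dictionary symbol, and the decoded page shows that symbol in both places.

```python
# Illustrative sketch of symbol-matching compression (the mechanism
# behind the copier bug), NOT real JBIG2. Tiles are flat sequences of
# bitonal pixels; tiles that differ by at most THRESHOLD pixels are
# collapsed onto one dictionary symbol.

THRESHOLD = 12  # assumed match tolerance, in differing pixels

def hamming(a, b):
    """Count differing pixels between two same-sized tiles."""
    return sum(p != q for p, q in zip(a, b))

def compress(tiles):
    """Return (dictionary, indices): each tile becomes an index into
    a dictionary of 'representative' symbols."""
    dictionary, indices = [], []
    for tile in tiles:
        for i, symbol in enumerate(dictionary):
            if hamming(tile, symbol) <= THRESHOLD:
                indices.append(i)   # reuse: on decode this tile is now
                break               # *identical* to the stored symbol
        else:
            dictionary.append(tile)
            indices.append(len(dictionary) - 1)
    return dictionary, indices

def decompress(dictionary, indices):
    return [dictionary[i] for i in indices]

# A smudged '6' and an '8' can land within THRESHOLD of each other;
# the output still looks crisp, which is why it's so hard to spot.
```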
Yeah, I'll admit that specific example wasn't the most relevant. Really, I just want to be able to scan papers and then be confident enough to destroy them without having to scrutinize the output. Rather than committing to specific post-processing, I settled on just keeping full masters of the 300dpi greyscale scans. Even at 5MB/page, that's just 100GB for 20k pages.
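The arithmetic is easy to sanity-check (page count and per-page size taken from above):

```python
# Back-of-the-envelope storage estimate for the master scans.
pages = 20_000
mb_per_page = 5                       # ~5MB per 300dpi greyscale page
total_gb = pages * mb_per_page / 1000
print(f"{total_gb:.0f} GB")           # -> 100 GB
```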
I don't think PNG provided meaningful compression on the greyscale scans. If FLIF didn't exist I certainly could have used PNG, if only for being nicer to work with than PGM. But a niche format like FLIF seemed like a small price to pay for going lossless at a reasonable size.
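For what it's worth, the size comparison is easy to reproduce on your own scans. A rough sketch, assuming the reference `flif` encoder is on PATH and Pillow is installed (both assumptions, and the filename is a placeholder):

```python
# Compare lossless sizes for one greyscale master: PGM vs PNG vs FLIF.
# Assumes the reference `flif` CLI is installed; filename is made up.
import os
import subprocess
from PIL import Image

master = "page0001.pgm"                      # hypothetical input file

Image.open(master).save("page0001.png")      # lossless PNG via Pillow
subprocess.run(["flif", master, "page0001.flif"], check=True)

for path in (master, "page0001.png", "page0001.flif"):
    print(f"{path}: {os.path.getsize(path) / 1e6:.2f} MB")
```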
JPEG would have sufficed, but JPEG artifacts have always bugged me. I also considered JPEG 2000 for a bit, but was left with concerns about how stable/future-proof the actual implementations are. Lossless is bit-perfect, so that concern is alleviated: if the format ever dies, the masters can be transcoded to something else with no loss at all.
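And bit-perfect is directly verifiable, which is the whole appeal: decode the archive copy and compare it against the original before the paper hits the shredder. A sketch, again assuming the `flif` CLI and placeholder filenames; it compares decoded pixel data rather than raw file bytes, since PGM headers can differ in whitespace even when the image is identical:

```python
# Verify the round-trip is bit-perfect: PGM -> FLIF -> PGM, then
# compare decoded pixel data. Assumes the reference `flif` CLI.
import hashlib
import subprocess
from PIL import Image

def pixel_digest(path):
    return hashlib.sha256(Image.open(path).tobytes()).hexdigest()

original = "page0001.pgm"                    # hypothetical master
subprocess.run(["flif", original, "page0001.flif"], check=True)
subprocess.run(["flif", "-d", "page0001.flif", "roundtrip.pgm"], check=True)

assert pixel_digest(original) == pixel_digest("roundtrip.pgm")
print("bit-perfect round-trip: the paper can go")
```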