>> This is an outright lie. The only honest answer is no.
> Are you sure about that?
Yes.
> I'm not... And all the news so far reinforces that opinion...
There are no news articles that explain how anyone will be falsely accused of having pictures of their own baby.
> Perceptual filter there seems pretty poor
> in terms of collision resistance
I don’t think you know anything about how poor the filter is. What is the false positive rate on randomly selected photos?
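For scale, here's a back-of-the-envelope sketch of how a per-image false-positive rate translates to account-level risk once a match threshold is involved. Every number below (the per-image rate, the library size, the threshold) is a hypothetical stand-in, not a figure from Apple:

```python
def binom_tail(n: int, p: float, t: int) -> float:
    """P(X >= t) for X ~ Binomial(n, p), summed via the term recurrence
    term(k+1) = term(k) * (n - k) / (k + 1) * p / (1 - p),
    which avoids huge binomial coefficients overflowing floats."""
    term = (1 - p) ** n                 # P(X = 0)
    total = term if t <= 0 else 0.0
    for k in range(n):
        term *= (n - k) / (k + 1) * p / (1 - p)
        if k + 1 >= t:
            total += term
        if term < 1e-300:               # remaining terms are negligible
            break
    return total

# Hypothetical numbers: a library of 10,000 photos, a per-image
# false-positive rate of 1 in a million, and a 30-image threshold.
print(binom_tail(10_000, 1e-6, 30))     # ~4e-93
```

The only point is that a threshold turns a modest per-image rate into an astronomically small account-level rate on random photos; it says nothing about adversarially crafted images, which is what the second hash is for.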
The system is even resistant against intentionally created false positives.
Here is the relevant paragraph from Apple’s documentation:
“as an additional safeguard, the visual derivatives themselves are matched to the known CSAM database by a second, independent perceptual hash. This independent hash is chosen to reject the unlikely possibility that the match threshold was exceeded due to non-CSAM images that were adversarially perturbed to cause false NeuralHash matches against the on-device encrypted CSAM database. If the CSAM finding is confirmed by this independent hash, the visual derivatives are provided to Apple human reviewers for final confirmation.”
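Read literally, that paragraph describes a two-stage check. Here is a minimal sketch of that shape; keyed BLAKE2 digests stand in for the two perceptual hashes, and the threshold is an assumption, so none of this is Apple's actual code:

```python
import hashlib

THRESHOLD = 30  # assumed match threshold, a stand-in value

def neural_hash(image: bytes) -> int:
    # Stand-in for the on-device perceptual hash (NOT the real NeuralHash).
    return int.from_bytes(hashlib.blake2b(image, digest_size=8).digest(), "big")

def independent_hash(image: bytes) -> int:
    # Stand-in for the second, independent server-side perceptual hash.
    return int.from_bytes(
        hashlib.blake2b(image, digest_size=8, key=b"independent").digest(), "big"
    )

def review_queue(photos, db_first, db_second):
    # Stage 1: count first-hash matches against the database.
    matched = [p for p in photos if neural_hash(p) in db_first]
    if len(matched) < THRESHOLD:
        return []  # threshold not exceeded: nothing is flagged
    # Stage 2: re-check flagged photos with the independent hash, rejecting
    # adversarial images that only collide against the first hash.
    confirmed = [p for p in matched if independent_hash(p) in db_second]
    # Only confirmed matches would go on to human review.
    return confirmed
```

The design choice to hedge on here is that an attacker who only has the extracted on-device model can only target stage 1; colliding with both independent hashes at once is a much harder problem.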
> There are no news articles that explain how anyone will be falsely accused for having pictures of their own baby.
Umm... the hash collisions that everyone keeps warning about are not enough? Given all the discussions so far, I'll just go ahead and assume your comment here is in bad faith.
> The system is even resistant against intentionally created false positives.
Famous last words... Here's one of the top posts on reddit.com/r/apple
It took only a few days after the model was extracted to show how flawed it is... Apple's only 'security' feature here was obscurity...
It's so broken that the person doing the analysis stopped, since Apple would just change the hash function to include his pictures as training data instead of fixing the whole system.
"Finding a SHA1 collision took 22 years, and there are still no effective preimage attacks against it. Creating the NeuralHash collider took a single week."
That comment is incoherent, and not an explanation of a vulnerability.
> Also funniest bit from that on how broken it is
> "Finding a SHA1 collision took 22 years, and there are still no effective preimage attacks against it. Creating the NeuralHash collider took a single week."
This quote demonstrates why we can reject that comment.
From their earlier statements, they appear to know the difference between a perceptual hash and a cryptographic hash, and yet here they compare the two as if they were expected to behave the same way.
Nobody who understands how perceptual hashing works would expect there not to be collisions or to think there was a meaningful comparison with SHA-1. The system doesn’t rely on the hash to behave like a cryptographic hash because it is not one.
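To make the distinction concrete, here's a toy sketch. It assumes a bare-bones average hash as a stand-in; NeuralHash is far more sophisticated, but it shares the "similar input, similar output" property that no cryptographic hash has:

```python
import hashlib

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when that pixel is
    brighter than the image mean. (A stand-in for NeuralHash.)"""
    mean = sum(pixels) / len(pixels)
    return [int(p > mean) for p in pixels]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# An 8x8 grayscale "image" and a slightly brightened copy of it.
image = [(i * 37) % 256 for i in range(64)]
tweaked = [p + 2 for p in image]

# Perceptual hash: the near-duplicate collides on purpose.
print(hamming(average_hash(image), average_hash(tweaked)))  # 0

# Cryptographic hash: the same tiny edit changes the whole digest.
sha = lambda px: hashlib.sha1(bytes(px)).hexdigest()
print(sha(image) == sha(tweaked))  # False
```

Collisions between similar images are the entire point of a perceptual hash, which is why a headline comparison against SHA-1 collision resistance tells you nothing.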
Either that commenter is confused or they are being deliberately misleading. Let’s assume they are just confused.
I have to assume you don’t know the difference between a cryptographic hash and a perceptual hash; otherwise you wouldn’t have quoted this.
Are you sure about that? I'm not... And all the news so far reinforces that opinion...
Getting falsely accused of something like this will ruin you even if in the end you win.
Here's Apple fucking up human review and destroying a teen's life: https://www.theregister.com/2021/05/29/apple_sis_lawsuit/
Imagine that with CSAM... The perceptual filter there seems pretty poor in terms of collision resistance.