
> This is an outright lie. The only honest answer is no.

Are you sure about that? I'm not... And all the news so far reinforces that opinion...

Getting falsely accused of something like this will ruin you even if in the end you win.

Here's Apple fucking up human review and destroying a teen's life: https://www.theregister.com/2021/05/29/apple_sis_lawsuit/

Imagine that with CSAM... The perceptual hash there seems pretty poor in terms of collision resistance.



>> This is an outright lie. The only honest answer is no.

> Are you sure about that?

Yes.

> I'm not... And all the news so far reinforces that opinion...

There are no news articles that explain how anyone will be falsely accused for having pictures of their own baby.

> The perceptual hash there seems pretty poor in terms of collision resistance.

I don’t think you know anything about how poor the hash actually is. What is the false positive rate on randomly selected photos?

The system is even resistant against intentionally created false positives.

Here is the relevant paragraph from Apple’s documentation:

“As an additional safeguard, the visual derivatives themselves are matched to the known CSAM database by a second, independent perceptual hash. This independent hash is chosen to reject the unlikely possibility that the match threshold was exceeded due to non-CSAM images that were adversarially perturbed to cause false NeuralHash matches against the on-device encrypted CSAM database. If the CSAM finding is confirmed by this independent hash, the visual derivatives are provided to Apple human reviewers for final confirmation.”

https://www.apple.com/child-safety/pdf/Security_Threat_Model...
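
Here's a minimal sketch in Python of the two-stage pipeline that paragraph describes — the function names, the hash stand-ins, and the threshold value are all my own illustrative assumptions, not Apple's actual implementation:

    # Hypothetical sketch of the two-stage match described above; every name
    # and number here is an illustrative stand-in, not Apple's real code.
    from dataclasses import dataclass

    THRESHOLD = 30  # assumed match threshold, for illustration only

    @dataclass
    class Photo:
        neural_hash: int      # stand-in for the on-device NeuralHash
        derivative_hash: int  # stand-in for the second, independent hash

    def flag_account(photos, csam_neural_db, csam_derivative_db):
        # Stage 1: NeuralHash matches against the known-CSAM database.
        stage1 = [p for p in photos if p.neural_hash in csam_neural_db]
        if len(stage1) < THRESHOLD:
            return False  # below threshold: nothing is flagged at all
        # Stage 2: the visual derivatives are re-checked with an unrelated
        # perceptual hash, so an image crafted to collide on NeuralHash is
        # rejected unless it also collides on this second function.
        stage2 = [p for p in stage1 if p.derivative_hash in csam_derivative_db]
        if len(stage2) < THRESHOLD:
            return False  # intentional NeuralHash collisions die here
        return True  # only now would human reviewers see the derivatives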

...


> There are no news articles that explain how anyone will be falsely accused for having pictures of their own baby.

Umm... the hash collisions that everyone keeps warning about are not enough? Given all the discussions so far, I'll just go ahead and assume your comment here is in bad faith.

> The system is even resistant against intentionally created false positives.

Famous last words... Here's one of the top posts on reddit.com/r/apple:

https://old.reddit.com/r/apple/comments/p930wu/i_wont_be_pos...

Here's a really high-quality collision: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX//issu...

Here are 2 totally different images whose hashes are off by a single BIT: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX//issu...

Here's a dog and a kid colliding: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX//issu...
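
To see how cheap collisions are in this family of hashes, here's a toy average-hash in Python. NeuralHash is a neural network, not this, but it's the same class of perceptual hash, and the collision mechanism — many images squeezed into one short code — is shared. The "images" here are made-up 4x4 grayscale grids:

    # Toy "average hash": one bit per pixel, set when that pixel is brighter
    # than the image's mean. Many different images share one short hash.
    def ahash(pixels):
        mean = sum(pixels) / len(pixels)
        return ''.join('1' if p > mean else '0' for p in pixels)

    # Two 4x4 grayscale grids with completely different pixel values but
    # the same brighter-than-average pattern -> guaranteed collision.
    img_a = [200, 210, 40, 30] * 4
    img_b = [120, 130, 90, 80] * 4

    print(ahash(img_a))                  # 1100110011001100
    print(ahash(img_a) == ahash(img_b))  # True: a collision by construction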

It took only a few days after extracting the model to show how flawed it is... Apple's only 'security' feature here was obscurity...

It's so broken that the person doing the analysis above stopped, since Apple would just change the hash function to include his pictures as training data instead of fixing the whole system.

Are you still convinced?

Having a second 'perceptual' hash doesn't really add much value... I'm not an expert; here's a better view on why: https://news.ycombinator.com/item?id=28243031

Also, the funniest bit from that thread on how broken it is:

"Finding a SHA1 collision took 22 years, and there are still no effective preimage attacks against it. Creating the NeuralHash collider took a single week."


>> There are no news articles that explain how anyone will be falsely accused for having pictures of their own baby.

> Umm... the hash collisions that everyone keeps warning about are not enough?

No, they are not enough. Hash collisions alone don’t cause the system to detect CSAM, even if they are generated intentionally.

If you don’t know this, then you simply don’t understand how the system works.
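
Back-of-envelope, with made-up numbers since Apple hasn't published the real rates: assuming the two hash functions' collision events are independent (which is the property the second hash is chosen for), an innocent account would need dozens of images that each collide on both functions — and the second hash never leaves Apple's servers, so an attacker can't even target it:

    import math

    # All three numbers are assumptions for illustration only.
    p_neuralhash = 1e-6   # assumed per-image NeuralHash false-match rate
    p_second = 1e-6       # assumed rate for the independent server-side hash
    threshold = 30        # assumed number of matches before review happens

    p_both = p_neuralhash * p_second  # independent events multiply
    print(f"per-image chance of colliding on both: {p_both:.0e}")  # 1e-12
    # (1e-12)**30 underflows a float, so report the exponent instead:
    print(f"log10 of the chance of {threshold} such images: "
          f"{threshold * math.log10(p_both):.0f}")  # -360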

> Given all the discussions so far, I'll just go ahead and assume your comment here is in bad faith.

What part is in bad faith?

>> The system is even resistant against intentionally created false positives.

> Famous last words...

All of those links show the same thing: that it’s possible to manufacture hash collisions.

Nobody is debating that point.

Not one of those links explain how a photo of your baby would trigger the system.

> Are you still convinced?

Yes.

> Having a second 'perceptual' hash doesn't really add much value...

Does it not? Can you explain why it doesn’t?

> I'm not an expert,

I guess that means you can’t explain what you are saying because you don’t understand it.

> here's a better view on why: https://news.ycombinator.com/item?id=28243031

That comment is incoherent, and not an explanation of a vulnerability.

> Also, the funniest bit from that thread on how broken it is: "Finding a SHA1 collision took 22 years, and there are still no effective preimage attacks against it. Creating the NeuralHash collider took a single week."

This quote demonstrates why we can reject that comment.

From their earlier statements, they appear to know the difference between a perceptual hash and a cryptographic hash, yet here they compare the two as if they were expected to behave the same way.

Nobody who understands how perceptual hashing works would expect it to be free of collisions, or think a comparison with SHA-1 is meaningful. The system doesn’t rely on the hash behaving like a cryptographic hash, because it is not one.
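
The design goals are literally opposite, which is easy to demonstrate — a sketch with Python's standard hashlib and a toy average-hash standing in for the perceptual side:

    import hashlib

    def ahash(pixels):  # toy perceptual hash: bit = brighter than the mean
        mean = sum(pixels) / len(pixels)
        return ''.join('1' if p > mean else '0' for p in pixels)

    pixels = [200, 210, 40, 30] * 4
    tweaked = list(pixels)
    tweaked[0] += 1  # a one-level brightness change, invisible to the eye

    # Cryptographic hash: ANY change scrambles the entire output.
    print(hashlib.sha1(bytes(pixels)).hexdigest())
    print(hashlib.sha1(bytes(tweaked)).hexdigest())
    # Perceptual hash: small changes leave the output identical. For this
    # kind of hash, "collisions" on similar inputs are the design goal.
    print(ahash(pixels) == ahash(tweaked))  # True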

Either that commenter is confused, or being deliberately misleading. Let’s assume they are just confused.

I have to assume you don’t know the difference between a cryptographic hash and a perceptual hash otherwise you wouldn’t have quoted this.



