Apple Defends Its Anti-Child Abuse Imagery Tech After Claims of ‘Hash Collisions’

3 years ago
Anonymous $drS9DEX_Sj

https://www.vice.com/en_us/article/wx5yzq/apple-defends-its-anti-child-abuse-imagery-tech-after-claims-of-hash-collisions

Researchers claim they have probed a particular part of Apple's new system for detecting and flagging child sexual abuse material, or CSAM, and were able to trick it into reporting that two clearly different images shared the same fingerprint. But Apple says this part of its system was never meant to be secret, that the overall system is designed to account for collisions like this, and that the code the researchers analyzed is a generic version, not the final implementation that will be used in the CSAM system itself.

On Wednesday, GitHub user AsuharietYgvar published details of what they claim is an implementation of NeuralHash, the hashing technology in the anti-CSAM system Apple announced at the beginning of August. Hours later, someone else claimed to have created a collision, meaning they tricked the system into producing the same hash for two different images. Ordinarily, a hash collision means that one file can appear to a system to be another. For example, if a piece of malware shares a hash with an innocuous file, an antivirus system may flag the harmless file, thinking it poses a threat to the user.
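To make the matching idea concrete, here is a minimal sketch of hash-based file flagging using SHA-256 as a stand-in. NeuralHash itself is a perceptual hash designed to tolerate resizing and recompression, so this is only a conceptual illustration, not Apple's algorithm; the filenames and blocklist are hypothetical.

```python
# Conceptual sketch of hash-based matching, NOT Apple's NeuralHash.
# SHA-256 is used here only to illustrate what a "collision" would mean.
import hashlib
from pathlib import Path


def file_digest(path: str) -> str:
    """Return the hex SHA-256 digest of a file's raw bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def is_flagged(path: str, known_bad_hashes: set[str]) -> bool:
    """Flag a file if its digest matches an entry in a blocklist.

    A collision would be an unrelated file whose digest happens to match
    an entry in known_bad_hashes: the matcher would then flag an
    innocuous file, exactly as in the antivirus example above.
    """
    return file_digest(path) in known_bad_hashes


if __name__ == "__main__":
    # Hypothetical filenames for illustration only.
    blocklist = {file_digest("known_bad.jpg")}
    print(is_flagged("holiday_photo.jpg", blocklist))  # False unless a collision occurs
```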
