Why the EU is unlikely to ban facial recognition any time soon

Jan 29, 2020, 3:18pm UTC

https://tech.newstatesman.com/gdpr/eu-facial-recognition-ban

The debate around the ethical use of facial recognition (FR) continues to gather momentum. A temporary ban on the use of FR in public spaces is described in a draft EU White Paper on Artificial Intelligence (AI) as one of the options for a future regulatory framework. It is unlikely that such a moratorium will be put in place any time soon, but Brussels is leading a discussion around FR that goes beyond the technology itself and focuses on the need to empower individuals in the digital era and to limit the potential for AI systems to make inappropriate decisions.

FR, which makes decisions based on image interpretation, poses risks to fundamental rights, including discrimination and privacy. FR training data is often incomplete or unrepresentative of the general population, so systems trained on that data will contain inherent biases. While human activity is also prone to bias, AI systems operate at scale and are not subject to any social control mechanism. A study from MIT Media Lab showed that the accuracy of FR technology differs across genders and races: the darker the skin, the higher the error rate, reaching up to 35 per cent for images of darker-skinned women. There are significant risks that FR used in law enforcement and in border, airport, and retail security will be unreliable.
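The kind of demographic accuracy gap the MIT Media Lab study measured can be illustrated with a minimal sketch: given a set of classifier predictions tagged by demographic subgroup, compute the error rate per group and compare. The function name and the sample data below are hypothetical, chosen only to show how such a disparity audit works; the figures are not from the study.

```python
# Hypothetical sketch: measuring per-group error rates for a face
# analysis classifier, in the spirit of a demographic disparity audit.
# All names and data here are illustrative, not from the MIT study.

from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples.

    Returns a dict mapping each group to its fraction of misclassified
    records.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Illustrative predictions for two demographic subgroups.
sample = [
    ("lighter-skinned", "female", "female"),
    ("lighter-skinned", "male", "male"),
    ("lighter-skinned", "female", "female"),
    ("lighter-skinned", "male", "male"),
    ("darker-skinned", "male", "female"),   # misclassification
    ("darker-skinned", "female", "female"),
    ("darker-skinned", "male", "female"),   # misclassification
    ("darker-skinned", "male", "male"),
]

rates = error_rates_by_group(sample)
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%} error rate")
```

A gap between the per-group rates, rather than the overall average alone, is what signals the kind of bias the article describes: a system can look accurate in aggregate while failing disproportionately for one subgroup.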
