To err is human – is that why we fear machines that can be made to err less? | John Naughton
https://www.theguardian.com/commentisfree/2019/dec/14/err-is-human-why-fear-machines-made-to-err-less-algorithmic-bias
One of the things that really annoys AI researchers is how supposedly “intelligent” machines are held to much higher standards than humans are. Take self-driving cars, they say. So far they’ve driven millions of miles with very few accidents, only a tiny number of them fatal. Yet whenever an autonomous vehicle kills someone there’s a huge hoo-ha, while every year in the US nearly 40,000 people die in crashes involving conventional vehicles.
Likewise, the AI evangelists complain, everybody and his dog (this columnist included) is up in arms about algorithmic bias: the way in which automated decision-making systems embody the racial, gender and other prejudices implicit in the data sets on which they were trained. And yet society is apparently content to endure the astonishing irrationality and capriciousness of much human decision-making.