How artificial intelligence can explain its decisions

https://www.sciencedaily.com/releases/2022/09/220902103300.htm

For the study, bioinformatics scientist Axel Mosig collaborated with Professor Andrea Tannapfel, head of the Institute of Pathology, oncologist Professor Anke Reinacher-Schick from the Ruhr-Universität's St. Josef Hospital, and biophysicist and PRODI founding director Professor Klaus Gerwert. The group developed a neural network, i.e. an AI, that can classify whether or not a tissue sample contains a tumour. To this end, they fed the AI a large number of microscopic tissue images, some of which contained tumours, while others were tumour-free.
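To illustrate what such supervised training looks like in practice, here is a minimal sketch of a binary image classifier. The architecture, image size, class names and training loop are illustrative assumptions for this article, not the network actually used in the study.

```python
# Minimal sketch of a binary tissue classifier, NOT the study's actual network:
# the architecture, tile size, and hyperparameters below are illustrative only.
import torch
import torch.nn as nn

class TissueClassifier(nn.Module):
    """Tiny CNN mapping a microscopy tile to a tumour / tumour-free score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),  # single logit: > 0 read as "tumour" under this convention
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One supervised training step on a batch of labelled tiles
# (labels: 1 = contains tumour, 0 = tumour-free).
model = TissueClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(8, 3, 128, 128)          # placeholder batch; real input would be tissue tiles
labels = torch.randint(0, 2, (8, 1)).float()  # placeholder pathologist annotations

logits = model(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
```

After many such steps over a large annotated image collection, the network learns to separate the two classes; what it does not do by default is report which image regions its decision was based on, which is the problem the next paragraph addresses.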

"Neural networks are initially a black box: it's unclear which identifying features a network learns from the training data," explains Axel Mosig. Unlike human experts, they lack the ability to explain their decisions. "However, for medical applications in particular, it's important that the AI is capable of explanation and thus trustworthy," adds bioinformatics scientist David Schuhmacher, who collaborated on the study.
