Researchers Want Guardrails to Help Prevent Bias in AI

5 years ago
Anonymous $xdcOWPpsb_

https://www.wired.com/story/researchers-guardrails-prevent-bias-ai/

Artificial intelligence has given us algorithms capable of recognizing faces, diagnosing disease, and, of course, crushing computer games. But even the smartest algorithms can sometimes behave in unexpected and unwanted ways, for example, picking up gender bias from the text or images they are fed.

A new framework for building AI programs suggests a way to prevent aberrant behavior in machine learning by specifying guardrails in the code from the outset. It aims to be particularly useful for non-experts deploying AI, an increasingly common scenario as the technology moves out of research labs and into the real world.
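One way to picture "guardrails specified in the code from the outset" is a safety test that stands between a trained candidate model and deployment: the model is returned only if, with high statistical confidence, a measured unfairness score stays below a limit on held-out data; otherwise the framework refuses to return a solution. The sketch below is purely illustrative and not the article's actual framework; the per-example score, the Hoeffding bound, and all names and thresholds are assumptions.

```python
import math
import random

def hoeffding_upper_bound(values, delta):
    # High-confidence upper bound on the mean of values in [0, 1],
    # via Hoeffding's inequality: mean + sqrt(ln(1/delta) / (2n)).
    n = len(values)
    mean = sum(values) / n
    return mean + math.sqrt(math.log(1.0 / delta) / (2 * n))

def safety_test(model, safety_data, threshold, delta=0.05):
    # The guardrail: accept the model only if we are (1 - delta)-confident
    # that its average per-example unfairness score on unseen data
    # stays at or below the threshold.
    scores = [model(x) for x, _ in safety_data]
    return hoeffding_upper_bound(scores, delta) <= threshold

# Hypothetical per-example unfairness score for a toy classifier:
# 1.0 if the prediction would differ between two groups with
# identical features, else 0.0. (This classifier ignores group
# membership entirely, so the score is always 0.0.)
def candidate(x):
    predict = lambda features, group: features > 0.5  # group is ignored
    return 1.0 if predict(x, 0) != predict(x, 1) else 0.0

random.seed(0)
safety_data = [(random.random(), None) for _ in range(1000)]

if safety_test(candidate, safety_data, threshold=0.1):
    print("Solution returned")
else:
    print("No Solution Found")  # the guardrail blocks deployment
```

The key design point is that the acceptable-behavior constraint (the threshold and confidence level) is stated up front by whoever deploys the system, rather than being tuned after the fact by a machine-learning expert.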
