Deep learning networks prefer the human voice -- just like us

https://www.sciencedaily.com/releases/2021/04/210406131947.htm

A new study from Mechanical Engineering Professor Hod Lipson and his PhD student Boyuan Chen proves that artificial intelligence systems might actually reach higher levels of performance if they are programmed with sound files of human language rather than with numerical data labels. The researchers discovered that in a side-by-side comparison, a neural network whose "training labels" consisted of sound files reached higher levels of performance in identifying objects in images, compared to another network that had been programmed in a more traditional manner, using simple binary inputs.

"To understand why this finding is significant," said Lipson, James and Sally Scapa Professor of Innovation and a member of Columbia's Data Science Institute, "It's useful to understand how neural networks are usually programmed, and why using the sound of the human voice is a radical experiment."
